[실전 LLM 파인튜닝] Day 6 Fine-Tuning, 3.3
3.3 GPU Parallelism

Some common strategies:
- Data Parallelism
- Model Parallelism
- Pipeline Parallelism
- Tensor Parallelism

3.3.1 Data Parallelism

Process (a minimal code sketch follows at the end of this subsection):
1. Split the dataset
2. Distribute the data across processing units
3. Perform operations on each unit
4. Aggregate the results

Pros:
- The same weights are kept on all processing units
- Gradients are synchronized across all units

Cons:
- Communication overhead
- VRAM usage can be uneven across devices
- Data replication might be needed
- Offers little benefit on small datasets
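To make the four steps above concrete, here is a minimal data-parallel training sketch using PyTorch's DistributedDataParallel (DDP). The tiny Linear model and random tensors are hypothetical placeholders, not from the post; the sampler and DDP calls are the standard PyTorch API, and the `gloo` backend is assumed so the example also runs on CPU (use `nccl` when each rank owns a GPU).

```python
# Minimal data-parallel sketch with PyTorch DDP (toy model/data are placeholders).
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn.functional as F
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def run(rank: int, world_size: int) -> None:
    # Each process joins the same process group.
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Same weights on all units: DDP broadcasts rank 0's parameters
    # to every rank at construction time.
    model = DDP(torch.nn.Linear(16, 1))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # Steps 1-2 (split / distribute): DistributedSampler hands each
    # rank a disjoint shard of the (toy) dataset.
    dataset = TensorDataset(torch.randn(256, 16), torch.randn(256, 1))
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    # Steps 3-4 (operate / aggregate): backward() triggers an all-reduce
    # that averages gradients across ranks, so every rank finishes the
    # step with identical gradients and therefore identical weights.
    for x, y in loader:
        optimizer.zero_grad()
        loss = F.mse_loss(model(x), y)
        loss.backward()  # gradients synchronized across all units here
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = 2  # number of data-parallel workers
    mp.spawn(run, args=(world_size,), nprocs=world_size)
```

The backward pass is also where the "communication overhead" con comes from: every step ends with an all-reduce of the gradients, which keeps the replicas in sync but costs bandwidth proportional to the model size, so small datasets or small batches leave the devices waiting on communication rather than computing.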
3.3.2 Model Parallelism …

Books
2025. 1. 6. 23:19