[Practical LLM Fine-Tuning] Day 8, Chapter 03 Fine-Tuning, 3.4.5 - 3.4.7
3.4.5 Generate Keyword data

Choosing the right batch_size

1. Model Size and Memory (RAM/VRAM)
- Smaller models (e.g., BERT) can handle larger batch sizes without running out of memory.
- Larger models (e.g., GPT-based models) require more memory to process each batch. A smaller batch_size reduces the memory footprint.

2. Performance and Throughput
- A larger batch_size improves throughput, because the model processes more samples per training step.
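A common way to reconcile these two points is to keep the per-device batch_size small enough to fit a large model in memory, then recover a larger effective batch via gradient accumulation. The sketch below (hypothetical numbers, not from the book) shows the arithmetic:

```python
def effective_batch_size(per_device_batch_size: int,
                         gradient_accumulation_steps: int,
                         num_devices: int = 1) -> int:
    """Global batch size the optimizer effectively sees: gradients from
    several small forward/backward passes are summed before one update."""
    return per_device_batch_size * gradient_accumulation_steps * num_devices

# A GPT-sized model that only fits batch_size=2 per GPU can still
# be trained with an effective batch of 32 on 2 GPUs by accumulating
# gradients over 8 micro-batches before each optimizer step:
print(effective_batch_size(2, 8, 2))  # → 32
```

In Hugging Face `TrainingArguments` this corresponds to the `per_device_train_batch_size` and `gradient_accumulation_steps` parameters; memory usage scales with the per-device value, while convergence behavior follows the effective value.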
Books
2025. 1. 11. 01:07