[Practical LLM Fine-Tuning] Day8 Chapter 03 Fine Tuning, 3.4.5 - 3.4.7
3.4.5 Generate Keyword data

Choosing the right batch_size

1. Model Size and Memory (RAM/VRAM)
- Smaller models (e.g., BERT) can handle larger batch sizes without running out of memory.
- Larger models (e.g., GPT-based models) require more memory to process each batch. A smaller batch_size reduces the memory footprint.

2. Performance and Throughput
- A larger batch_size improves throughput because the mod…
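The memory trade-off above can be sketched with a back-of-the-envelope calculation: given a VRAM budget, reserve memory for the model itself, see how many samples' worth of activations fit, and fall back to gradient accumulation when the per-step batch must shrink below the target. All numbers here (24 GB GPU, 14 GB for weights and optimizer state, ~300 MB of activations per sample) are hypothetical, illustrative assumptions, not measurements from any specific model.

```python
def max_batch_size(vram_bytes: int, per_sample_bytes: int, model_bytes: int) -> int:
    """Largest batch that fits after reserving memory for the model/optimizer."""
    free = vram_bytes - model_bytes
    return max(free // per_sample_bytes, 0)


def accumulation_steps(target_batch: int, micro_batch: int) -> int:
    """Gradient-accumulation steps needed to reach the target effective batch."""
    return -(-target_batch // micro_batch)  # ceiling division


# Assumed, illustrative numbers: 24 GB GPU, 14 GB taken by weights and
# optimizer state, ~300 MB of activations per sample for a GPT-style model.
GB = 1024 ** 3
MB = 1024 ** 2

micro = max_batch_size(24 * GB, 300 * MB, 14 * GB)
steps = accumulation_steps(64, micro)  # want an effective batch of 64
print(f"micro batch: {micro}, accumulation steps: {steps}")
```

In practice the per-sample activation cost also scales with sequence length, so halving `max_length` roughly doubles the feasible batch size at the same memory budget.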
Books
2025. 1. 11. 01:07