Linking pages
- How To Finetune GPT Like Large Language Models on a Custom Dataset - Lightning AI https://lightning.ai/pages/blog/how-to-finetune-gpt-like-large-language-models-on-a-custom-dataset/ 122 comments
- Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch - Lightning AI https://lightning.ai/pages/community/tutorial/pytorch-memory-vit-llm/ 0 comments
- Scaling Large (Language) Models with PyTorch Lightning - Lightning AI https://lightning.ai/blog/scaling-large-language-models-with-pytorch-lightning/ 0 comments
- [AINews] Claude 3 is officially America's Next Top Model • Buttondown https://buttondown.email/ainews/archive/ainews-claude-3-is-officially-americas-next-top/ 0 comments
Linked pages
- Introducing LLaMA: A foundational, 65-billion-parameter language model https://ai.facebook.com/blog/large-language-model-llama-meta-ai/ 204 comments
- GitHub - Lightning-AI/lit-llama: Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed. https://github.com/Lightning-AI/lit-llama 69 comments
- [2208.07339] LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale https://arxiv.org/abs/2208.07339 33 comments
- Single-precision floating-point format - Wikipedia https://en.wikipedia.org/wiki/Single-precision_floating-point_format 13 comments
- [2210.17323] GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers https://arxiv.org/abs/2210.17323 0 comments
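The linked pages above all revolve around reduced-precision training and inference, which is the subject of this article. As a rough illustration only, here is a minimal mixed-precision training sketch in plain PyTorch using torch.autocast and torch.cuda.amp.GradScaler; the model, batch data, and hyperparameters are toy placeholders, not taken from the article or any of the linked pages.

```python
# Minimal mixed-precision training sketch.
# Assumptions: PyTorch >= 1.10; a CUDA GPU (falls back to full precision on CPU).
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"

# Placeholder model and optimizer, only for illustration.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# GradScaler rescales the loss so float16 gradients don't underflow.
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

for step in range(100):
    x = torch.randn(32, 512, device=device)          # fake input batch
    y = torch.randint(0, 10, (32,), device=device)   # fake labels
    optimizer.zero_grad(set_to_none=True)

    # Ops run in float16 where it is numerically safe, float32 elsewhere;
    # master weights stay in float32 (see the linked single-precision page).
    with torch.autocast(device_type=device, dtype=torch.float16, enabled=use_amp):
        loss = loss_fn(model(x), y)

    scaler.scale(loss).backward()  # backward on the scaled loss
    scaler.step(optimizer)         # unscales gradients, then steps
    scaler.update()                # adjusts the scale factor for next step
```

In PyTorch Lightning the same technique is typically enabled via a Trainer flag rather than hand-written autocast code, e.g. Trainer(precision="16-mixed") in Lightning 2.x.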