Hacker News
- Takeaways from hundreds of LLM finetuning experiments with LoRA https://lightning.ai/pages/community/lora-insights/ 39 comments
Linking pages
- Practical Tips for Finetuning LLMs Using LoRA (Low-Rank Adaptation) https://magazine.sebastianraschka.com/p/practical-tips-for-finetuning-llms 37 comments
- GitHub - mlabonne/llm-course: Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks. https://github.com/mlabonne/llm-course 10 comments
- 📄 NeurIPS 2023 Primer - by Sebastian Ruder - NLP News https://nlpnewsletter.substack.com/p/neurips-2023-primer 0 comments
- NeurIPS 2023 Recap — Best Papers - by swyx - Latent Space https://www.latent.space/p/neurips-2023-papers 0 comments
- (Opinionated) Guide to ML Engineer Job Hunting | Yuan Meng https://www.yuan-meng.com/posts/mle_interviews/ 0 comments
Linked pages
- [2305.14314] QLoRA: Efficient Finetuning of Quantized LLMs https://arxiv.org/abs/2305.14314 129 comments
- [2310.06825] Mistral 7B https://arxiv.org/abs/2310.06825 124 comments
- Falcon LLM - Home https://falconllm.tii.ae/ 87 comments
- [2309.05463] Textbooks Are All You Need II: phi-1.5 technical report https://arxiv.org/abs/2309.05463 65 comments
- LLM Training: RLHF and Its Alternatives https://magazine.sebastianraschka.com/p/llm-training-rlhf-and-its-alternatives 14 comments
- [2106.09685] LoRA: Low-Rank Adaptation of Large Language Models https://arxiv.org/abs/2106.09685 8 comments
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day | NeurIPS 2023 Challenge https://llm-efficiency-challenge.github.io/index 5 comments
- [1711.05101] Decoupled Weight Decay Regularization https://arxiv.org/abs/1711.05101 0 comments
- GitHub - google/sentencepiece: Unsupervised text tokenizer for Neural Network-based text generation. https://github.com/google/sentencepiece 0 comments
- GitHub - gururise/AlpacaDataCleaned: Alpaca dataset from Stanford, cleaned and curated https://github.com/gururise/AlpacaDataCleaned 0 comments
- Parameter-Efficient LLM Finetuning With Low-Rank Adaptation (LoRA) - Lightning AI https://lightning.ai/pages/community/tutorial/lora-llm/ 0 comments
- [2307.09288] Llama 2: Open Foundation and Fine-Tuned Chat Models https://arxiv.org/abs/2307.09288 0 comments
- GitHub - EleutherAI/lm-evaluation-harness: A framework for few-shot evaluation of autoregressive language models. https://github.com/EleutherAI/lm-evaluation-harness 0 comments