Linking pages
- How To Finetune GPT Like Large Language Models on a Custom Dataset - Lightning AI https://lightning.ai/pages/blog/how-to-finetune-gpt-like-large-language-models-on-a-custom-dataset/ 122 comments
- Finetuning LLMs with LoRA and QLoRA: Insights from Hundreds of Experiments - Lightning AI https://lightning.ai/pages/community/lora-insights/ 39 comments
- Learn AI — Senko Rašić https://blog.senko.net/learn-ai 0 comments
- What is low-rank adaptation (LoRA)? - TechTalks https://bdtechtalks.com/2023/05/22/what-is-lora/ 0 comments
- Fine-tuning OpenLLaMA-7B with QLoRA for instruction following | Jou-ching (George) Sung https://georgesung.github.io/ai/qlora-ift/ 0 comments
- Scaling Large (Language) Models with PyTorch Lightning - Lightning AI https://lightning.ai/blog/scaling-large-language-models-with-pytorch-lightning/ 0 comments
Linked pages
- dolly/data at master · databrickslabs/dolly · GitHub https://github.com/databrickslabs/dolly/tree/master/data 89 comments
- GitHub - Lightning-AI/lit-llama: Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed. https://github.com/Lightning-AI/lit-llama 69 comments
- [2106.09685] LoRA: Low-Rank Adaptation of Large Language Models https://arxiv.org/abs/2106.09685 8 comments
- GitHub - tatsu-lab/stanford_alpaca https://github.com/tatsu-lab/stanford_alpaca 2 comments
- [2104.08691] The Power of Scale for Parameter-Efficient Prompt Tuning https://arxiv.org/abs/2104.08691 1 comment
- [2212.10560] Self-Instruct: Aligning Language Models with Self-Generated Instructions https://arxiv.org/abs/2212.10560 1 comment
Title: Parameter-Efficient LLM Finetuning With Low-Rank Adaptation (LoRA) - Lightning AI