Linked pages
- dolly/data at master · databrickslabs/dolly · GitHub https://github.com/databrickslabs/dolly/tree/master/data 89 comments
- GitHub - Lightning-AI/lit-llama: Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed. https://github.com/Lightning-AI/lit-llama 69 comments
- Machine Learning Q and AI by Sebastian Raschka, PhD [PDF/iPad/Kindle] https://leanpub.com/machine-learning-q-and-ai 12 comments
- [2106.09685] LoRA: Low-Rank Adaptation of Large Language Models https://arxiv.org/abs/2106.09685 8 comments
- Understanding Parameter-Efficient Finetuning of Large Language Models: From Prefix Tuning to LLaMA-Adapters https://sebastianraschka.com/blog/2023/llm-finetuning-llama-adapter.html 4 comments
- GitHub - tatsu-lab/stanford_alpaca https://github.com/tatsu-lab/stanford_alpaca 2 comments
- [2104.08691] The Power of Scale for Parameter-Efficient Prompt Tuning https://arxiv.org/abs/2104.08691 1 comment
- [2212.10560] Self-Instruct: Aligning Language Models with Self-Generated Instructions https://arxiv.org/abs/2212.10560 1 comment