Linked pages
- [2305.14314] QLoRA: Efficient Finetuning of Quantized LLMs https://arxiv.org/abs/2305.14314
- [2305.11206] LIMA: Less Is More for Alignment https://arxiv.org/abs/2305.11206
- [2306.02707] Orca: Progressive Learning from Complex Explanation Traces of GPT-4 https://arxiv.org/abs/2306.02707
- Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA https://huggingface.co/blog/4bit-transformers-bitsandbytes
- [2205.14135] FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness https://arxiv.org/abs/2205.14135
- GitHub - tatsu-lab/stanford_alpaca https://github.com/tatsu-lab/stanford_alpaca
- tatsu-lab/alpaca · Datasets at Hugging Face https://huggingface.co/datasets/tatsu-lab/alpaca
- [2203.02155] Training language models to follow instructions with human feedback https://arxiv.org/abs/2203.02155