Linked pages
- [2305.14314] QLoRA: Efficient Finetuning of Quantized LLMs https://arxiv.org/abs/2305.14314 129 comments
- Practical Tips for Finetuning LLMs Using LoRA (Low-Rank Adaptation) https://magazine.sebastianraschka.com/p/practical-tips-for-finetuning-llms 37 comments
- [2306.02707] Orca: Progressive Learning from Complex Explanation Traces of GPT-4 https://arxiv.org/abs/2306.02707 33 comments
- Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA https://huggingface.co/blog/4bit-transformers-bitsandbytes 15 comments
- mistralai/Mistral-7B-Instruct-v0.2 · Hugging Face https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2 3 comments
- tatsu-lab/alpaca · Datasets at Hugging Face https://huggingface.co/datasets/tatsu-lab/alpaca 1 comment
- tiiuae/falcon-40b · Hugging Face https://huggingface.co/tiiuae/falcon-40b 1 comment
- Introducing the Hugging Face LLM Inference Container for Amazon SageMaker https://huggingface.co/blog/sagemaker-huggingface-llm 1 comment
- Deploy LLMs with Hugging Face Inference Endpoints https://huggingface.co/blog/inference-endpoints-llm 0 comments
Article: How to Fine-Tune LLMs in 2024 with Hugging Face (philschmid.de)