Linking pages
- State of Open Source AI Book - 2023 Edition https://book.premai.io/state-of-open-source-ai/index.html
- Explaining ChatGPT to Anyone in <20 Minutes https://cameronrwolfe.substack.com/p/explaining-chatgpt-to-anyone-in-20
- Model Merging: A Survey - by Cameron R. Wolfe, Ph.D. https://cameronrwolfe.substack.com/p/model-merging
Linked pages
- GPT-4 https://openai.com/research/gpt-4
- fast.ai - Making neural nets uncool again https://fast.ai
- Introducing ChatGPT https://openai.com/blog/chatgpt/
- Practical Deep Learning for Coders https://course.fast.ai
- Open LLM Leaderboard - a Hugging Face Space by HuggingFaceH4 https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- Mosaic LLMs (Part 2): GPT-3 quality for <$500k https://www.mosaicml.com/blog/gpt-3-quality-for-500k
- [2303.08774] GPT-4 Technical Report https://arxiv.org/abs/2303.08774
- tatsu-lab/alpaca · Datasets at Hugging Face https://huggingface.co/datasets/tatsu-lab/alpaca
- Parameter-Efficient Fine-Tuning using 🤗 PEFT https://huggingface.co/blog/peft
- LLaMA: LLMs for Everyone! - by Cameron R. Wolfe https://cameronrwolfe.substack.com/p/llama-llms-for-everyone
- Parameter-Efficient LLM Finetuning With Low-Rank Adaptation (LoRA) https://sebastianraschka.com/blog/2023/llm-finetuning-lora.html
- [2304.12244] WizardLM: Empowering Large Language Models to Follow Complex Instructions https://arxiv.org/abs/2304.12244
- LLaMA-2 from the Ground Up - by Cameron R. Wolfe, Ph.D. https://cameronrwolfe.substack.com/p/llama-2-from-the-ground-up
- Deci/DeciCoder-1b · Hugging Face https://huggingface.co/Deci/DeciCoder-1b
- GitHub - jondurbin/airoboros: Customizable implementation of the self-instruct paper https://github.com/jondurbin/airoboros
- [2309.00267] RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback https://arxiv.org/abs/2309.00267
Article: Understanding and Using Supervised Fine-Tuning (SFT) for Language Models