- [P] Recapping recent LLM research concerning tuning strategies & data efficiency https://magazine.sebastianraschka.com/p/ahead-of-ai-9-llm-tuning-and-dataset/ 4 comments machinelearning
Linked pages
- OpenAI may leave the EU if regulations bite - CEO | Reuters https://www.reuters.com/technology/openai-may-leave-eu-if-regulations-bite-ceo-2023-05-24/ 565 comments
- Japan Goes All In: Copyright Doesn't Apply To AI Training https://technomancers.ai/japan-goes-all-in-copyright-doesnt-apply-to-ai-training/ 498 comments
- GitHub - nomic-ai/gpt4all: gpt4all: a chatbot trained on a massive collection of clean assistant data including code, stories and dialogue https://github.com/nomic-ai/gpt4all 325 comments
- fairseq/examples/mms at main · facebookresearch/fairseq · GitHub https://github.com/facebookresearch/fairseq/tree/main/examples/mms 222 comments
- GPT4All Chat https://gpt4all.io/index.html 182 comments
- How Rogue AIs may Arise - Yoshua Bengio https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/ 142 comments
- [2305.14314] QLoRA: Efficient Finetuning of Quantized LLMs https://arxiv.org/abs/2305.14314 129 comments
- GitHub - openai/whisper: Robust Speech Recognition via Large-Scale Weak Supervision https://github.com/openai/whisper/ 126 comments
- [2305.15717] The False Promise of Imitating Proprietary LLMs https://arxiv.org/abs/2305.15717 119 comments
- GitHub - Lightning-AI/lit-llama: Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed. https://github.com/Lightning-AI/lit-llama 69 comments
- Open LLM Leaderboard - a Hugging Face Space by HuggingFaceH4 https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard 51 comments
- [2305.11206] LIMA: Less Is More for Alignment https://arxiv.org/abs/2305.11206 44 comments
- Deep Learning Fundamentals - Lightning AI https://lightning.ai/pages/courses/deep-learning-fundamentals/ 12 comments
- Machine Learning Q… by Sebastian Raschka, PhD [PDF/iPad/Kindle] https://leanpub.com/machine-learning-q-and-ai 12 comments
- [2106.09685] LoRA: Low-Rank Adaptation of Large Language Models https://arxiv.org/abs/2106.09685 8 comments
- [2305.18290] Direct Preference Optimization: Your Language Model is Secretly a Reward Model https://arxiv.org/abs/2305.18290 8 comments
- [2109.01652] Finetuned Language Models Are Zero-Shot Learners https://arxiv.org/abs/2109.01652 4 comments
- AI Research Highlights In 3 Sentences Or Less (April-May 2023) https://magazine.sebastianraschka.com/p/ai-research-highlights-in-3-sentences 4 comments
- Lightning AI https://lightning.ai/ 1 comment
- [2305.14201] Goat: Fine-tuned LLaMA Outperforms GPT-4 on Arithmetic Tasks https://arxiv.org/abs/2305.14201 1 comment
Title: Ahead of AI #9: LLM Tuning & Dataset Perspectives