Hacker News
Linking pages
- OpenAI is too cheap to beat https://generatingconversation.substack.com/p/openai-is-too-cheap-to-beat 412 comments
- Gorilla: An LLM for Massive APIs https://generatingconversation.substack.com/p/gorilla-an-llm-for-massive-apis 0 comments
- Fine tuning is just synthetic data engineering https://generatingconversation.substack.com/p/fine-tuning-is-just-synthetic-data 0 comments
- OpenAI is too cheap to beat (redux) https://generatingconversation.substack.com/p/openai-is-too-cheap-to-beat-redux 0 comments
Linked pages
- GitHub - microsoft/LoRA: Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" https://github.com/microsoft/LoRA 156 comments
- [1706.03762] Attention Is All You Need https://arxiv.org/abs/1706.03762 145 comments
- Gorilla https://gorilla.cs.berkeley.edu/ 121 comments
- [2106.09685] LoRA: Low-Rank Adaptation of Large Language Models https://arxiv.org/abs/2106.09685 8 comments
- Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality | LMSYS Org https://lmsys.org/blog/2023-03-30-vicuna/ 7 comments
- Fine-Tuning LLMs: LoRA or Full-Parameter? An in-depth Analysis with Llama 2 https://www.anyscale.com/blog/fine-tuning-llms-lora-or-full-parameter-an-in-depth-analysis-with-llama-2 2 comments
- Should you fine-tune a model? https://generatingconversation.substack.com/p/should-you-fine-tune-a-model 0 comments
Title: LoRA, explained