Linked pages
- Introducing Gemini 1.5, Google's next-generation AI model https://blog.google/technology/ai/google-gemini-next-generation-model-february-2024/ 715 comments
- Introducing the next generation of Claude | Anthropic https://www.anthropic.com/news/claude-3-family 704 comments
- The real research behind the wild rumors about OpenAI’s Q* project | Ars Technica https://arstechnica.com/ai/2023/12/the-real-research-behind-the-wild-rumors-about-openais-q-project/ 44 comments
- GitHub - guidance-ai/guidance: A guidance language for controlling large language models. https://github.com/guidance-ai/guidance 41 comments
- RAFT (Retrieval Augmented Fine-tuning): A new way to teach LLMs (Large Language Models) to be better at RAG (Retrieval Augmented Generation) https://techcommunity.microsoft.com/t5/ai-ai-platform-blog/raft-a-new-way-to-teach-llms-to-be-better-at-rag/ba-p/4084674 13 comments
- Google Knowledge Graph - Wikipedia https://en.wikipedia.org/wiki/Knowledge_Graph 9 comments
- [2106.09685] LoRA: Low-Rank Adaptation of Large Language Models https://arxiv.org/abs/2106.09685 8 comments
- Propositional calculus - Wikipedia http://en.wikipedia.org/wiki/Propositional_calculus 8 comments
- Introducing Devin, the first AI software engineer https://www.cognition-labs.com/introducing-devin 8 comments
- Greg Kamradt on X: "Pressure Testing GPT-4-128K With Long Context Recall" https://twitter.com/GregKamradt/status/1722386725635580292 1 comment
- [2005.11401] Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks https://arxiv.org/abs/2005.11401 0 comments
- Few-Shot Prompting | Prompt Engineering Guide https://www.promptingguide.ai/techniques/fewshot 0 comments
- [2307.03172] Lost in the Middle: How Language Models Use Long Contexts https://arxiv.org/abs/2307.03172 0 comments
- [2309.00267] RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback https://arxiv.org/abs/2309.00267 0 comments
- GitHub - gkamradt/LLMTest_NeedleInAHaystack: Doing simple retrieval from LLM models at various context lengths to measure accuracy https://github.com/gkamradt/LLMTest_NeedleInAHaystack 0 comments
Source: Can We Prevent LLMs From Hallucinating? - Brett DiDonato