Linking pages
- All about evaluating Large language models https://explodinggradients.com/all-about-evaluating-large-language-models 8 comments
- Emergent Abilities of LLMs 🪴 https://willthompson.name/emergent-abilities-of-llms 1 comment
- The $250K Inverse Scaling Prize and Human-AI Alignment https://www.surgehq.ai/blog/the-250k-inverse-scaling-prize-and-human-ai-alignment 0 comments
- Why I Think More NLP Researchers Should Engage with AI Safety Concerns – NYU Alignment Research Group https://wp.nyu.edu/arg/why-ai-safety/ 0 comments
- The State of Competitive Machine Learning | ML Contests https://mlcontests.com/state-of-competitive-machine-learning-2022/ 0 comments
- GPT-4 vs GPT-3: What you should know. https://explodinggradients.com/gpt-4-vs-gpt-3-what-you-should-know 0 comments
- Scaling in the service of reasoning & model-based ML - Yoshua Bengio https://yoshuabengio.org/2023/03/21/scaling-in-the-service-of-reasoning-model-based-ml/ 0 comments
- GitHub - Mooler0410/LLMsPracticalGuide: A curated list of practical guide resources of LLMs (LLMs Tree, Examples, Papers) https://github.com/Mooler0410/LLMsPracticalGuide 0 comments
- Intelligence as efficient model building | Alex’s blog https://atelfo.github.io/2023/05/17/intelligence-as-efficient-model-building.html 0 comments
- Truth https://compphil.github.io/truth/ 0 comments
Linked pages
- [2005.14165] Language Models are Few-Shot Learners https://arxiv.org/abs/2005.14165 201 comments
- Home \ Anthropic https://www.anthropic.com/ 48 comments
- Aligning language models to follow instructions https://openai.com/blog/instruction-following/ 11 comments
- Google Colab https://research.google.com/colaboratory/faq.html#usage-limits 7 comments
- [2109.07958] TruthfulQA: Measuring How Models Mimic Human Falsehoods https://arxiv.org/abs/2109.07958 7 comments
- [2105.11447] True Few-Shot Learning with Language Models https://arxiv.org/abs/2105.11447 7 comments
- [2106.13219] Towards Understanding and Mitigating Social Biases in Language Models https://arxiv.org/abs/2106.13219 2 comments
- https://irmckenzie.co.uk/round2 1 comment
- [2204.05862] Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback https://arxiv.org/abs/2204.05862 1 comment
- [2203.15556] Training Compute-Optimal Large Language Models https://arxiv.org/abs/2203.15556 0 comments
- [2001.08361] Scaling Laws for Neural Language Models https://arxiv.org/abs/2001.08361 0 comments
- [2204.02311] PaLM: Scaling Language Modeling with Pathways https://arxiv.org/abs/2204.02311 0 comments
- GitHub - google/BIG-bench: Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models https://github.com/google/BIG-bench 0 comments
- [2102.01293] Scaling Laws for Transfer https://arxiv.org/abs/2102.01293 0 comments
- [2206.04615] Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models https://arxiv.org/abs/2206.04615 0 comments
- [1606.06031] The LAMBADA dataset: Word prediction requiring a broad discourse context https://arxiv.org/abs/1606.06031 0 comments