Linking pages
- Gemma Scope: helping the safety community shed light on the inner workings of language models - Google DeepMind https://deepmind.google/discover/blog/gemma-scope-helping-the-safety-community-shed-light-on-the-inner-workings-of-language-models/ 4 comments
- OpenAI peeks into the “black box” of neural networks with new research | Ars Technica https://arstechnica.com/information-technology/2023/05/openai-peeks-into-the-black-box-of-neural-networks-with-new-research/ 2 comments
- GitHub - JShollaj/awesome-llm-interpretability: A curated list of Large Language Model (LLM) Interpretability resources. https://github.com/JShollaj/awesome-llm-interpretability 1 comment
- 24 - Superalignment with Jan Leike | AXRP - the AI X-risk Research Podcast https://axrp.net/episode/2023/07/27/episode-24-superalignment-jan-leike.html 1 comment
- Scaling Automatic Neuron Description | Transluce AI https://transluce.org/neuron-descriptions 1 comment
- When Words Cannot Describe: Designing For AI Beyond Conversational Interfaces — Smashing Magazine https://www.smashingmagazine.com/2024/02/designing-ai-beyond-conversational-interfaces/ 0 comments