Linked pages
- Language models can explain neurons in language models https://openai.com/research/language-models-can-explain-neurons-in-language-models 530 comments
- The Pile http://pile.eleuther.ai/ 294 comments
- Extracting concepts from GPT-4 https://openai.com/index/extracting-concepts-from-gpt-4/ 143 comments
- Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet https://transformer-circuits.pub/2024/scaling-monosemanticity/ 135 comments
- Introducing text and code embeddings https://openai.com/blog/introducing-text-and-code-embeddings/ 80 comments
- Using Artificial Intelligence to Augment Human Intelligence https://distill.pub/2017/aia/ 34 comments
- Synthesizer for thought | thesephist.com https://thesephist.com/posts/synth/ 10 comments
- Towards Monosemanticity: Decomposing Language Models With Dictionary Learning https://transformer-circuits.pub/2023/monosemantic-features/index.html 5 comments
- Toy Models of Superposition https://transformer-circuits.pub/2022/toy_model/index.html 4 comments
- Notion https://notion.com 0 comments
- Circuits Updates - January 2024 https://transformer-circuits.pub/2024/jan-update/index.html 0 comments
- [2404.16014] Improving Dictionary Learning with Gated Sparse Autoencoders https://arxiv.org/abs/2404.16014 0 comments