Linking pages
- LLMs Know More Than What They Say - by Ruby Pai https://arjunbansal.substack.com/p/llms-know-more-than-what-they-say 18 comments
- GitHub - SalvatoreRa/ML-news-of-the-week: A collection of the best ML and AI news every week (research, news, resources) https://github.com/SalvatoreRa/ML-news-of-the-week 8 comments
- An Intuitive Explanation of Sparse Autoencoders for Mechanistic Interpretability of LLMs | Adam Karvonen https://adamkarvonen.github.io/machine_learning/2024/06/11/sae-intuitions.html 0 comments
- "Mechanistic interpretability" for LLMs, explained https://seantrott.substack.com/p/mechanistic-interpretability-for 0 comments
- Should Developers Care about Interpretability? • Thariq Shihipar https://www.thariq.io/blog/interpretability/ 0 comments
Linked pages
- Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet https://transformer-circuits.pub/2024/scaling-monosemanticity/ 135 comments
- Claude https://claude.ai/ 48 comments
- Mapping the Mind of a Large Language Model \ Anthropic https://www.anthropic.com/research/mapping-mind-language-model 1 comment
Golden Gate Claude \ Anthropic