- [P] Ecco - Language model analysis and visualization toolkit https://www.lesswrong.com/posts/AcKRB8wDpdaN6v6ru/interpreting-gpt-the-logit-lens 4 comments machinelearning
Linking pages
- GPT-3 Creative Fiction · Gwern.net https://www.gwern.net/GPT-3#prompts-as-programming 141 comments
- Finding the Words to Say: Hidden State Visualizations for Language Models – Jay Alammar – Visualizing machine learning one concept at a time. https://jalammar.github.io/hidden-states/ 4 comments
- GitHub - AlignmentResearch/tuned-lens: Tools for understanding how transformer predictions are built layer-by-layer https://github.com/AlignmentResearch/tuned-lens 1 comment
- GitHub - JShollaj/awesome-llm-interpretability: A curated list of Large Language Model (LLM) Interpretability resources. https://github.com/JShollaj/awesome-llm-interpretability 1 comment
- GitHub - jalammar/ecco: Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks explaining the behavior of Transformer-based language models (like GPT2, BERT, RoBERTa, T5, and T0). https://github.com/jalammar/ecco 0 comments
- Interfaces for Explaining Transformer Language Models – Jay Alammar – Visualizing machine learning one concept at a time. https://jalammar.github.io/explaining-transformers/ 0 comments
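The tools linked above (Ecco, tuned-lens) build on the logit-lens idea: read out an intermediate layer's hidden state by projecting it through the model's unembedding matrix, as if it were the final layer. A minimal sketch of that projection, using invented toy matrices rather than a real model (all names, dimensions, and values here are illustrative assumptions, not any library's API):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def logit_lens(hidden_states, unembedding, vocab):
    """For each layer's hidden state, project through the shared
    unembedding matrix and return the top token with its probability."""
    readouts = []
    for h in hidden_states:
        # logits[v] = dot(h, unembedding[v]) -- same readout the final layer uses
        logits = [sum(hi * wi for hi, wi in zip(h, row)) for row in unembedding]
        probs = softmax(logits)
        best = max(range(len(vocab)), key=lambda v: probs[v])
        readouts.append((vocab[best], probs[best]))
    return readouts

# Toy 2-d hidden states over a 3-token vocabulary (values invented for illustration).
vocab = ["cat", "dog", "fish"]
unembedding = [[1.0, 0.0],   # row for "cat"
               [0.0, 1.0],   # row for "dog"
               [0.5, 0.5]]   # row for "fish"
hidden_states = [[0.2, 0.1],   # early layer: weak preference
                 [0.1, 0.9],   # middle layer: leaning "dog"
                 [0.0, 2.0]]   # final layer: confident "dog"

for layer, (tok, p) in enumerate(logit_lens(hidden_states, unembedding, vocab)):
    print(f"layer {layer}: {tok} ({p:.2f})")
```

In a real transformer the hidden states come from the model's residual stream (e.g. via Hugging Face's `output_hidden_states=True`) and the unembedding is the tied output embedding matrix; tuned-lens additionally learns a per-layer affine correction before this projection.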