Hacker News
- Feature Visualization: How neural nets build up their understanding of images https://distill.pub/2017/feature-visualization/ 63 comments
- Feature Visualization: How neural networks build up their understanding of images https://distill.pub/2017/feature-visualization/ 4 comments (r/learnmachinelearning)
- Feature Visualization: How neural networks build up their understanding of images https://distill.pub/2017/feature-visualization/ 10 comments (r/programming)
Linking pages
- Multimodal Neurons in Artificial Neural Networks https://openai.com/blog/multimodal-neurons/ 93 comments
- Training Computer Vision Models on Random Noise Instead of Real Images - Unite.AI https://www.unite.ai/training-computer-vision-models-on-random-noise-instead-of-real-images/ 64 comments
- Anthropic | Decomposing Language Models Into Understandable Components https://www.anthropic.com/index/decomposing-language-models-into-understandable-components 62 comments
- Interpretability in Machine Learning: An Overview https://thegradient.pub/interpretability-in-ml-a-broad-overview/ 47 comments
- NLP's ImageNet moment has arrived https://thegradient.pub/nlp-imagenet/ 42 comments
- Gaussian Distributions are Soap Bubbles http://www.inference.vc/high-dimensional-gaussian-distributions-are-soap-bubble/ 33 comments
- No shortcuts to knowledge – Late & Variable – Learning to write programs that learn to write programs https://deoxyribose.github.io/No-Shortcuts-to-Knowledge/ 22 comments
- ML Resources https://sgfin.github.io/learning-resources/ 21 comments
- Keras vs. PyTorch: Alien vs. Predator recognition with transfer learning - deepsense.ai https://deepsense.ai/keras-vs-pytorch-avp-transfer-learning/ 11 comments
- What is AI interpretability? Artificial intelligence researchers are reverse-engineering ChatGPT, Claude, and Gemini. - Vox https://www.vox.com/future-perfect/362759/ai-interpretability-openai-claude-gemini-neuroscience 7 comments
- The Google Brain Team — Looking Back on 2017 (Part 1 of 2) – Google AI Blog https://research.googleblog.com/2018/01/the-google-brain-team-looking-back-on.html 6 comments
- Can A.I. Be Taught to Explain Itself? - The New York Times https://mobile.nytimes.com/2017/11/21/magazine/can-ai-be-taught-to-explain-itself.html 2 comments
- Introducing Activation Atlases https://openai.com/blog/introducing-activation-atlases/ 2 comments
- Inception loops discover what excites neurons most using deep predictive models | Nature Neuroscience https://www.nature.com/articles/s41593-019-0517-x 1 comment
- Model Interpretability and Visualization: Looking Inside the Neural Network Black Box | by Avinash | Heartbeat https://heartbeat.fritz.ai/model-interpretability-and-visualization-looking-inside-the-neural-network-black-box-79f45f2cb771 0 comments
- Robustness Beyond Security: Representation Learning – gradient science http://gradientscience.org/robust_reps/ 0 comments
- Starting deep learning hands-on: image classification on CIFAR-10 - deepsense.ai https://blog.deepsense.ai/deep-learning-hands-on-image-classification/ 0 comments
- Explainable AI May Surrender Confidential Data More Easily - Unite.AI https://www.unite.ai/explainable-ai-may-surrender-confidential-data-more-easily/ 0 comments
- GitHub - r0f1/datascience: Curated list of Python resources for data science. https://github.com/r0f1/datascience 0 comments
- OpenAI Puts CV Models Under Their Microscope | by Synced | SyncedReview | Medium https://medium.com/syncedreview/openai-puts-cv-models-under-their-microscope-95234ae75244 0 comments