Hacker News
- Large language models, explained with a minimum of math and jargon https://www.understandingai.org/p/large-language-models-explained-with 6 comments
Linking pages
- Top AIs still fail IQ tests - by Maxim Lott - Maximum Truth https://www.maximumtruth.org/p/top-ais-still-fail-iq-tests 118 comments
- We're All Wrong About AI - by Arnold Kling - In My Tribe https://arnoldkling.substack.com/p/were-all-wrong-about-ai 88 comments
- Google abandoned "don't be evil" — and Gemini is the result https://www.natesilver.net/p/google-abandoned-dont-be-evil-and 84 comments
- The real research behind the wild rumors about OpenAI’s Q* project | Ars Technica https://arstechnica.com/ai/2023/12/the-real-research-behind-the-wild-rumors-about-openais-q-project/ 44 comments
- AIs ranked by IQ; AI passes 100 IQ for first time, with release of Claude-3 https://www.maximumtruth.org/p/ais-ranked-by-iq-ai-passes-100-iq 39 comments
- How to think about the OpenAI Q* rumors - by Timothy B Lee https://www.understandingai.org/p/how-to-think-about-the-openai-q-rumors 21 comments
- Enhancing Self-Therapy with Internal Family Systems and AI - https://allbein.gs/ifs/ 1 comment
- #18: A slice of great things you missed this year https://www.scientificdiscovery.dev/p/18-a-slice-of-great-things-you-missed 0 comments
- "Mechanistic interpretability" for LLMs, explained https://seantrott.substack.com/p/mechanistic-interpretability-for 0 comments
- OpenAI just unleashed an alien of extraordinary ability https://www.understandingai.org/p/openai-just-unleashed-an-alien-of 0 comments
- How transformer-based networks are improving self-driving software https://www.understandingai.org/p/how-transformer-based-networks-are 0 comments
- Why large language models struggle with long contexts https://www.understandingai.org/p/why-large-language-models-struggle 0 comments
Linked pages
- [2303.12712] Sparks of Artificial General Intelligence: Early experiments with GPT-4 https://arxiv.org/abs/2303.12712 911 comments
- [2302.02083] Theory of Mind May Have Spontaneously Emerged in Large Language Models https://arxiv.org/abs/2302.02083 375 comments
- [1706.03762] Attention Is All You Need https://arxiv.org/abs/1706.03762 145 comments
- On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? https://dl.acm.org/doi/pdf/10.1145/3442188.3445922 122 comments
- How computers got shockingly good at recognizing images | Ars Technica https://arstechnica.com/science/2018/12/how-computers-got-shockingly-good-at-recognizing-images/ 30 comments
- Matrix multiplication - Wikipedia https://en.wikipedia.org/wiki/Matrix_multiplication#Outer_product 4 comments
- [2001.08361] Scaling Laws for Neural Language Models https://arxiv.org/abs/2001.08361 0 comments
- [1301.3781] Efficient Estimation of Word Representations in Vector Space https://arxiv.org/abs/1301.3781 0 comments
- [2012.14913] Transformer Feed-Forward Layers Are Key-Value Memories https://arxiv.org/abs/2012.14913 0 comments
- [2302.08399] Large Language Models Fail on Trivial Alterations to Theory-of-Mind Tasks https://arxiv.org/abs/2302.08399 0 comments