Linking pages
- Why transformative artificial intelligence is really, really hard to achieve https://thegradient.pub/why-transformative-artificial-intelligence-is-really-really-hard-to-achieve/ 63 comments
- 〰️The great computing stagnation https://www.exponentialview.co/p/the-great-computing-stagnation 58 comments
- The Inference Cost Of Search Disruption – Large Language Model Cost Analysis https://www.semianalysis.com/p/the-inference-cost-of-search-disruption 47 comments
- GPT-4 Architecture, Infrastructure, Training Dataset, Costs, Vision, MoE https://www.semianalysis.com/p/gpt-4-architecture-infrastructure 10 comments
- The Coming Wave of AI, and How Nvidia Dominates https://www.fabricatedknowledge.com/p/the-coming-wave-of-ai-and-how-nvidia 3 comments
- MAD, China, and the Semiconductor Showdown (Part 2) https://eastwind.substack.com/p/mad-china-and-the-semiconductor-showdown-2ba 2 comments
- Tesla AI Capacity Expansion – H100, Dojo D1, D2, HW 4.0, X.AI, Cloud Service Provider https://www.semianalysis.com/p/tesla-ai-capacity-expansion-h100 2 comments
- 🔥 Your guide to AI: February 2023 - Guide to AI https://nathanbenaich.substack.com/p/your-guide-to-ai-february-2023 1 comment
- Google AI Infrastructure Supremacy: Systems Matter More Than Microarchitecture https://www.semianalysis.com/p/google-ai-infrastructure-supremacy 1 comment
- Diminishing Returns in Machine Learning Part 1 https://www.fromthenew.world/p/diminishing-returns-in-machine-learning 0 comments
Linked pages
- How Nvidia’s CUDA Monopoly In Machine Learning Is Breaking - OpenAI Triton And PyTorch 2.0 https://www.semianalysis.com/p/nvidiaopenaitritonpytorch 112 comments
- Mosaic LLMs (Part 2): GPT-3 quality for <$500k https://www.mosaicml.com/blog/gpt-3-quality-for-500k 7 comments
- [2203.15556] Training Compute-Optimal Large Language Models https://arxiv.org/abs/2203.15556 0 comments
- [2204.02311] PaLM: Scaling Language Modeling with Pathways https://arxiv.org/abs/2204.02311 0 comments
- Davis Summarizes Papers | Davis Blalock | Substack https://dblalock.substack.com/ 0 comments