Hacker News
Linking pages
- $2 H100s: How the GPU Bubble Burst - by Eugene Cheah https://www.latent.space/p/gpu-bubble 289 comments
- We Are Running Out of Low-Background Tokens (Nov 2023 Recap) https://www.latent.space/i/139368545/the-concept-of-low-background-tokens 6 comments
- The Accidental AI Canvas - with Steve Ruiz of tldraw https://www.latent.space/p/tldraw 2 comments
- Notebooks = Chat++ and RAG = RecSys! — with Bryan Bischof of Hex Magic https://www.latent.space/i/139219943/rag-recsys 0 comments
- The AI Stack for the 3rd Epoch of Computing https://spillai.substack.com/p/the-ai-stack-for-the-3rd-epoch-of 0 comments
- The Busy Person's Intro to Finetuning & Open Source AI - Wing Lian, Axolotl https://www.latent.space/p/axolotl 0 comments
- The "Normsky" architecture for AI coding agents — with Beyang Liu + Steve Yegge of SourceGraph https://www.latent.space/p/sourcegraph 0 comments
- The AI-First Graphics Editor - with Suhail Doshi of Playground AI https://www.latent.space/p/suhail-doshi 0 comments
- NeurIPS 2023 Recap — Best Papers - by swyx - Latent Space https://www.latent.space/p/neurips-2023-papers 0 comments
- AI Magic: Shipping 1000s of successful products with no managers and a team of 12 — Jeremy Howard of Answer.ai https://www.latent.space/p/answerai 0 comments
Linked pages
- tinygrad: A simple and powerful neural network framework https://tinygrad.org/ 143 comments
- Google Gemini Eats The World – Gemini Smashes GPT-4 By 5X, The GPU-Poors https://www.semianalysis.com/p/google-gemini-eats-the-world-gemini 113 comments
- How Nvidia’s CUDA Monopoly In Machine Learning Is Breaking - OpenAI Triton And PyTorch 2.0 https://www.semianalysis.com/p/nvidiaopenaitritonpytorch 112 comments
- RWKV: Reinventing RNNs for the Transformer Era — with Eugene Cheah of UIlicious https://www.latent.space/p/rwkv#%C2%A7the-eleuther-mafia 66 comments
- Homepage | Cerebras https://www.cerebras.net/ 53 comments
- Training Cluster as a service: Train your LLM at scale on our infrastructure https://huggingface.co/training-cluster 45 comments
- Modular: AI development starts here https://www.modular.com/ 39 comments
- MatX | Faster chips for LLMs https://matx.com 24 comments
- The End of Finetuning — with Jeremy Howard of Fast.ai https://www.latent.space/p/fastai#details 5 comments
- Microsoft Reveals Custom 128-Core Arm Datacenter CPU, Massive Maia 100 GPU Designed for AI | Tom's Hardware https://www.tomshardware.com/news/microsoft-azure-maia-ai-accelerator-cobalt-cpu-custom 4 comments
- AMD MI300 – Taming The Hype – AI Performance, Volume Ramp, Customers, Cost, IO, Networking, Software https://www.semianalysis.com/p/amd-mi300-taming-the-hype-ai-performance 2 comments
- FlashAttention 2: making Transformers 800% faster w/o approximation - with Tri Dao of Together AI https://www.latent.space/p/flashattention 0 comments
- TPUv5e: The New Benchmark in Cost-Efficient Inference and Training for <200B Parameter Models https://www.semianalysis.com/p/tpuv5e-the-new-benchmark-in-cost 0 comments