Linking pages
- Why transformative artificial intelligence is really, really hard to achieve https://thegradient.pub/why-transformative-artificial-intelligence-is-really-really-hard-to-achieve/ 63 comments
- Plentiful, high-paying jobs in the age of AI https://www.noahpinion.blog/p/plentiful-high-paying-jobs-in-the 45 comments
- Economists vs. EAs 2 - by Brian Chau - From the New World https://www.fromthenew.world/p/economists-vs-eas-2 0 comments
- AI and the Structure of Reasoning | Reaction Wheel https://reactionwheel.net/2023/08/ai-and-the-structure-of-reasoning.html 0 comments
Linked pages
- GPT-4 https://openai.com/research/gpt-4 5744 comments
- Planning for AGI and beyond https://openai.com/blog/planning-for-agi-and-beyond/ 210 comments
- The Best GPUs for Deep Learning in 2023 — An In-depth Analysis https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/ 145 comments
- ChatGPT sets record for fastest-growing user base - analyst note | Reuters https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ 124 comments
- Moore's law - Wikipedia https://en.wikipedia.org/wiki/Moore%27s_law 67 comments
- GPU Benchmarks for Deep Learning | Lambda https://lambdalabs.com/gpu-benchmarks 23 comments
- Nvidia Introduces cuDNN, a CUDA-based library for Deep Neural Networks http://www.infoq.com/news/2014/09/cudnn 20 comments
- Sigmoid function - Wikipedia https://en.wikipedia.org/wiki/Sigmoid_function 6 comments
- Tensor Processing Unit - Wikipedia https://en.wikipedia.org/wiki/Tensor_processing_unit 5 comments
- Best GPU for Deep Learning in 2022 (so far) https://lambdalabs.com/blog/best-gpu-2022-sofar/ 3 comments
- CUDA - Wikipedia http://en.wikipedia.org/wiki/CUDA 3 comments
- Before the flood - by Samuel Hammond - Second Best https://www.secondbest.ca/p/before-the-flood 1 comment
- Google AI Infrastructure Supremacy: Systems Matter More Than Microarchitecture https://www.semianalysis.com/p/google-ai-infrastructure-supremacy 1 comment
- Google says its custom machine learning chips are often 15-30x faster than GPUs and CPUs | TechCrunch https://techcrunch.com/2017/04/05/google-says-its-custom-machine-learning-chips-are-often-15-30x-faster-than-gpus-and-cpus/ 0 comments
- The AI Brick Wall – A Practical Limit For Scaling Dense Transformer Models, and How GPT 4 Will Break Past It https://www.semianalysis.com/p/the-ai-brick-wall-a-practical-limit 0 comments
- Introducing Triton: Open-source GPU programming for neural networks https://openai.com/research/triton 0 comments
- Techniques for training large neural networks https://openai.com/research/techniques-for-training-large-neural-networks 0 comments
Article title: Diminishing Returns in Machine Learning Part 1