Hacker News
- Pathways Language Model (PaLM): Scaling to 540B parameters https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html 207 comments
Reddit
- Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html 4 comments r/programming
- [R] Google's 540B (Dense) model Pathways LLM, "Unlocks" new tasks proportional to scale https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html 54 comments r/machinelearning
- Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html 13 comments r/futurology
Linking pages
- No Joke: Google's AI Is Smart Enough to Understand Your Humor - CNET https://www.cnet.com/tech/services-and-software/no-joke-googles-ai-is-smart-enough-to-understand-your-humor/ 479 comments
- All Roads Lead to Rome: The Machine Learning Job Market in 2022 | Eric Jang https://evjang.com/2022/04/25/rome.html 259 comments
- New AI features and tools for Google Workspace, Cloud and developers https://blog.google/technology/ai/ai-developers-google-cloud-workspace/ 241 comments
- AI Canon | Andreessen Horowitz https://a16z.com/2023/05/25/ai-canon/ 219 comments
- Will AI target your job next? - by Ash Jafari - AI Future https://aifuture.substack.com/p/will-ai-target-your-job-next 155 comments
- Distilling step-by-step: Outperforming larger language models with less training data and smaller model sizes – Google Research Blog https://blog.research.google/2023/09/distilling-step-by-step-outperforming.html 123 comments
- Google Gemini Eats The World – Gemini Smashes GPT-4 By 5X, The GPU-Poors https://www.semianalysis.com/p/google-gemini-eats-the-world-gemini 113 comments
- Minerva: Solving Quantitative Reasoning Problems with Language Models – Google AI Blog http://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html 103 comments
- Google's AI robot uses language models to write code | Popular Science https://www.popsci.com/technology/google-ai-robot-code-as-policies 82 comments
- We're told AI neural networks 'learn' the way humans do. A neuroscientist explains why that's not the case https://theconversation.com/were-told-ai-neural-networks-learn-the-way-humans-do-a-neuroscientist-explains-why-thats-not-the-case-183993 79 comments
- There's now an open source alternative to ChatGPT, but good luck running it • TechCrunch https://techcrunch.com/2022/12/30/theres-now-an-open-source-alternative-to-chatgpt-but-good-luck-running-it/ 68 comments
- Characterizing Emergent Phenomena in Large Language Models – Google AI Blog https://ai.googleblog.com/2022/11/characterizing-emergent-phenomena-in.html 57 comments
- Google's Med-PaLM 2 Shows Promise in Revolutionizing Healthcare with AI https://myelectricsparks.com/google-med-palm-2-revolutionizing-healthcare-ai/ 50 comments
- BLOOM Is the Most Important AI Model of the Decade https://thealgorithmicbridge.substack.com/p/bloom-is-the-most-important-ai-model 47 comments
- GitHub - lucidrains/x-transformers: A simple but complete full-attention transformer with a set of promising experimental features from various papers https://github.com/lucidrains/x-transformers 40 comments
- TPU v4 enables performance, energy and CO2e efficiency gains | Google Cloud Blog https://cloud.google.com/blog/topics/systems/tpu-v4-enables-performance-energy-and-co2e-efficiency-gains 38 comments
- MaMMUT: A simple vision-encoder text-decoder architecture for multimodal tasks – Google AI Blog https://ai.googleblog.com/2023/05/mammut-simple-vision-encoder-text.html 33 comments
- Meta unveils a new large language model that can run on a single GPU [Updated] | Ars Technica https://arstechnica.com/information-technology/2023/02/chatgpt-on-your-pc-meta-unveils-new-ai-model-that-can-run-on-a-single-gpu/ 28 comments
- Google CEO Sundar Pichai promises Bard AI chatbot upgrades soon: ‘We clearly have more capable models’ - The Verge https://www.theverge.com/2023/3/31/23664426/google-bard-ai-chatbot-upgrades-coming-soon-sundar-pichai 22 comments
- GPT-4 Is Coming Soon. Here’s What We Know About It | by Alberto Romero | Towards Data Science https://towardsdatascience.com/gpt-4-is-coming-soon-heres-what-we-know-about-it-64db058cfd45?gi=6b5ffbdc901c 20 comments
Linked pages
- C (programming language) - Wikipedia https://en.wikipedia.org/wiki/C_(programming_language)#History 395 comments
- Competitive programming with AlphaCode https://www.deepmind.com/blog/competitive-programming-with-alphacode 295 comments
- [2005.14165] Language Models are Few-Shot Learners https://arxiv.org/abs/2005.14165 201 comments
- [1706.03762] Attention Is All You Need https://arxiv.org/abs/1706.03762 145 comments
- Introducing Pathways: A next-generation AI architecture https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ 33 comments
- [2108.07732] Program Synthesis with Large Language Models https://arxiv.org/abs/2108.07732 25 comments
- LaMDA: Towards Safe, Grounded, and High-Quality Dialog Models for Everything – Google AI Blog https://ai.googleblog.com/2022/01/lamda-towards-safe-grounded-and-high.html 11 comments
- Python (programming language) - Wikipedia https://en.m.wikipedia.org/wiki/Python_(programming_language) 9 comments
- [2107.03374] Evaluating Large Language Models Trained on Code https://arxiv.org/abs/2107.03374 8 comments
- Transformer: A Novel Neural Network Architecture for Language Understanding – Google AI Blog https://ai.googleblog.com/2017/08/transformer-novel-neural-network.html 3 comments
- [2201.11903] Chain of Thought Prompting Elicits Reasoning in Large Language Models https://arxiv.org/abs/2201.11903 1 comment
- [2112.06905] GLaM: Efficient Scaling of Language Models with Mixture-of-Experts https://arxiv.org/abs/2112.06905 1 comment
- [2203.12533] Pathways: Asynchronous Distributed Dataflow for ML https://arxiv.org/abs/2203.12533 1 comment
- GitHub - facebookresearch/TransCoder: Public release of the TransCoder research project https://arxiv.org/pdf/2006.03511.pdf https://github.com/facebookresearch/TransCoder/ 1 comment
- [2203.15556] Training Compute-Optimal Large Language Models https://arxiv.org/abs/2203.15556 0 comments
- [2001.08361] Scaling Laws for Neural Language Models https://arxiv.org/abs/2001.08361 0 comments
- [2106.06600] Break-It-Fix-It: Unsupervised Learning for Program Repair https://arxiv.org/abs/2106.06600 0 comments
- GitHub - google/BIG-bench: Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models https://github.com/google/BIG-bench 0 comments
- Google Cloud Model Cards https://modelcards.withgoogle.com/about 0 comments
- Data parallelism - Wikipedia https://en.wikipedia.org/wiki/Data_parallelism 0 comments