Hacker News
- Maxtext: A simple, performant and scalable Jax LLM https://github.com/google/maxtext 7 comments
- Google’s maxtext – A simple, performant and scalable Jax LLM https://github.com/google/maxtext 2 comments
Linking pages
- Gemma: Google introduces new state-of-the-art open models https://blog.google/technology/developers/gemma-open-models/ 535 comments
- the world’s largest distributed LLM training job on TPU v5e | Google Cloud Blog https://cloud.google.com/blog/products/compute/the-worlds-largest-distributed-llm-training-job-on-tpu-v5e 50 comments
- GitHub - SalvatoreRa/ML-news-of-the-week: A collection of the best ML and AI news every week (research, news, resources) https://github.com/SalvatoreRa/ML-news-of-the-week 8 comments
- GitHub - google/paxml: Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimentation and parallelization, and has demonstrated industry leading model flop utilization rates. https://github.com/google/paxml 0 comments
- Using Cloud TPU Multislice to scale AI workloads | Google Cloud Blog https://cloud.google.com/blog/products/compute/using-cloud-tpu-multislice-to-scale-ai-workloads 0 comments
- Transformer inference tricks - by Finbarr Timbers https://www.artfintel.com/p/transformer-inference-tricks 0 comments
- What’s new with Google Cloud’s AI Hypercomputer architecture | Google Cloud Blog https://cloud.google.com/blog/products/compute/whats-new-with-google-clouds-ai-hypercomputer-architecture 0 comments
- Google open-sources tools to support AI model development | TechCrunch https://techcrunch.com/2024/04/09/google-open-sources-tools-to-support-ai-model-development/ 0 comments
- GitHub - google/JetStream: JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs welcome). https://github.com/google/JetStream 0 comments
- Accelerating AI Inference with Google Cloud TPUs and GPUs | Google Cloud Blog https://cloud.google.com/blog/products/compute/accelerating-ai-inference-with-google-cloud-tpus-and-gpus 0 comments
Linked pages
- GitHub - karpathy/nanoGPT: The simplest, fastest repository for training/finetuning medium-sized GPTs. https://github.com/karpathy/nanoGPT 366 comments
- GitHub - karpathy/minGPT: A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training https://github.com/karpathy/minGPT 133 comments
- the world’s largest distributed LLM training job on TPU v5e | Google Cloud Blog https://cloud.google.com/blog/products/compute/the-worlds-largest-distributed-llm-training-job-on-tpu-v5e 50 comments
- GitHub - google/paxml: Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimentation and parallelization, and has demonstrated industry leading model flop utilization rates. https://github.com/google/paxml 0 comments
- Gemma - a family of lightweight, state-of-the-art open models from Google. | Google AI for Developers https://ai.google.dev/gemma 0 comments