- Benchmarking Large Language Models on NVIDIA H100 GPUs with CoreWeave (Part 1) https://www.mosaicml.com/blog/coreweave-nvidia-h100-part-1 15 comments nvidia
Linking pages
- Google’s New AI-Focused 'A3' Supercomputer Has 26,000 GPUs https://www.hpcwire.com/2023/05/10/googles-new-ai-focused-a3-supercomputer-has-26000-gpus/ 40 comments
- GitHub - mosaicml/llm-foundry https://github.com/mosaicml/llm-foundry 0 comments
- GitHub - NVIDIA/TransformerEngine: A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference. https://github.com/NVIDIA/TransformerEngine 0 comments
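The TransformerEngine link above describes FP8 training on Hopper and Ada GPUs. As a rough illustration of the usage pattern its README documents, here is a minimal sketch that wraps a forward pass in FP8 autocasting; it assumes an H100/Ada-class GPU and an installed `transformer_engine` package, and the tensor shapes are arbitrary placeholders rather than values from the post.

```python
# Minimal sketch of FP8 execution with NVIDIA TransformerEngine (PyTorch).
# Assumes an H100/Ada-class CUDA GPU and `pip install transformer-engine`;
# shapes below are arbitrary placeholders, not values from the linked post.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# te.Linear is a drop-in replacement for torch.nn.Linear with FP8 support.
model = te.Linear(768, 3072, bias=True)
inp = torch.randn(2048, 768, device="cuda")

# Delayed-scaling FP8 recipe using the E4M3 format, as in the TE quickstart.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.E4M3)

# Run the forward pass under FP8 autocasting; the backward pass reuses
# the scaling state captured by the recipe.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = model(inp)

out.sum().backward()
```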