Linking pages
- GitHub - janhq/awesome-local-ai: An awesome repository of local AI tools https://github.com/janhq/awesome-local-ai 3 comments
- Alpa: Automated Model-Parallel Deep Learning – Google AI Blog https://ai.googleblog.com/2022/05/alpa-automated-model-parallel-deep.html 1 comment
- awesome-marketing-datascience/awesome-ai.md at master · underlines/awesome-marketing-datascience · GitHub https://github.com/underlines/awesome-marketing-datascience/blob/master/awesome-ai.md 1 comment
- Alpa: Automated Model-Parallel Deep Learning – Google AI Blog https://ai.googleblog.com/2022/05/alpa-automated-model-parallel-deep.html?m=1 0 comments
- Google AI Introduces A Method For Automating Inter- And Intra-Operator Parallelism For Distributed Deep Learning - MarkTechPost https://www.marktechpost.com/2022/05/08/google-ai-introduces-a-method-for-automating-inter-and-intra-operator-parallelism-for-distributed-deep-learning/ 0 comments
- GitHub - BlackSamorez/tensor_parallel: Automatically split your PyTorch models on multiple GPUs for training & inference https://github.com/BlackSamorez/tensor_parallel 0 comments
- GitHub - Hannibal046/Awesome-LLM: Awesome-LLM: a curated list of Large Language Model https://github.com/Hannibal046/Awesome-LLM 0 comments
- A Brief Overview of Parallelism Strategies in Deep Learning | Alex McKinney https://afmck.in/posts/2023-02-26-parallelism/ 0 comments
- GitHub - AIoT-MLSys-Lab/Efficient-LLMs-Survey: Efficient Large Language Models: A Survey https://github.com/AIoT-MLSys-Lab/Efficient-LLMs-Survey 0 comments
Linked pages
- GitHub - google/jax: Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more https://github.com/google/jax 99 comments
- Serving OPT-175B Language Model with Alpa https://opt.alpa.ai 2 comments
- Alpa: Automated Model-Parallel Deep Learning – Google AI Blog https://ai.googleblog.com/2022/05/alpa-automated-model-parallel-deep.html 1 comment
- GitHub - ray-project/ray: Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads. https://github.com/ray-project/ray 0 comments
- XLA: Optimizing Compiler for Machine Learning | TensorFlow https://www.tensorflow.org/xla 0 comments