GitHub - openai/triton: Development repository for the Triton language and compiler

- [R] How to make CUDA libraries more performant? https://github.com/openai/triton 4 comments (r/MachineLearning)
Linking pages
- PyTorch 2.0 | PyTorch https://pytorch.org/get-started/pytorch-2.0/ 153 comments
- Introducing Triton: Open-Source GPU Programming for Neural Networks https://openai.com/blog/triton/ 116 comments
- How to Optimize a CUDA Matmul Kernel for cuBLAS-like Performance: a Worklog https://siboehm.com/articles/22/CUDA-MMM 49 comments
- GitHub - mgaudet/CompilerJobs: A listing of compiler, language and runtime teams for people looking for jobs in this area https://github.com/mgaudet/CompilerJobs 10 comments
- Specialized Hardware for AI: Rethinking Assumptions and Implications for the Future - Gradient Flow https://gradientflow.com/ai-hardware-2023/ 1 comment
- GitHub - Lightning-AI/lightning-thunder: Source to source compiler for PyTorch. It makes PyTorch programs faster on single accelerators and distributed. https://github.com/Lightning-AI/lightning-thunder 1 comment
- GitHub - ridgerchu/matmulfreellm: Implementation for MatMul-free LM. https://github.com/ridgerchu/matmulfreellm 1 comment
- GitHub - ml-tooling/best-of-ml-python: 🏆 A ranked list of awesome machine learning Python libraries. Updated weekly. https://github.com/ml-tooling/best-of-ml-python 0 comments
- GitHub - merrymercy/awesome-tensor-compilers: A list of awesome compiler projects and papers for tensor computation and deep learning. https://github.com/merrymercy/awesome-tensor-compilers 0 comments
- OpenAI releases Triton, a programming language for AI workload optimization | VentureBeat https://venturebeat.com/2021/07/28/openai-releases-triton-a-programming-language-for-ai-workload-optimization/ 0 comments
- GitHub - kayvr/token-hawk: WebGPU LLM inference tuned by hand https://github.com/kayvr/token-hawk 0 comments
- Phased Array Microphone | Ben Wang’s Blog https://benwang.dev/2023/02/26/Phased-Array-Microphone.html 0 comments
- GitHub - srush/Triton-Puzzles https://github.com/srush/Triton-Puzzles 0 comments
- Accelerating MoE model inference with Locality-Aware Kernel Design | PyTorch https://pytorch.org/blog/accelerating-moe-model/ 0 comments
- GitHub - BobMcDear/attorch: A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. https://github.com/BobMcDear/attorch 0 comments
- GitHub - sustcsonglin/flash-linear-attention: Efficient implementations of state-of-the-art linear attention models in Pytorch and Triton https://github.com/sustcsonglin/flash-linear-attention 0 comments
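Many of the pages above build on Triton kernels. For orientation, here is a minimal sketch of what a Triton kernel looks like, adapted from the vector-add example in the official Triton tutorials; the block size of 1024 and the function names are illustrative choices, not requirements:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                    # each program instance handles one block
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                    # guard the ragged final block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)  # 1D launch grid
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

# Usage (requires a CUDA-capable GPU):
# a = torch.rand(10_000, device="cuda"); b = torch.rand(10_000, device="cuda")
# assert torch.allclose(add(a, b), a + b)
```

The kernel is written in Python but compiled by Triton to GPU code; masked loads and stores replace the explicit bounds checks and thread indexing of hand-written CUDA.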