Hacker News
Linked pages
- FlexAttention: The Flexibility of PyTorch with the Performance of FlashAttention | PyTorch https://pytorch.org/blog/flexattention/ 24 comments
- GitHub - pytorch-labs/gpt-fast: Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. https://github.com/pytorch-labs/gpt-fast 1 comment
- GitHub - pytorch/torchtune: PyTorch native finetuning library https://github.com/pytorch/torchtune 0 comments
- GitHub - pytorch/torchtitan: A native PyTorch Library for large model training https://github.com/pytorch/torchtitan 0 comments
Related searches:
Search whole site: site:blog.ezyang.com
Search title: Ways to use torch.compile : ezyang’s blog
See how to search.