- Breaking the barrier: How to run GPT2-XL (float32) on Colab https://github.com/max-ng/recurser 2 comments deeplearning
- Breaking the VRAM barrier: How to run GPT2-XL on Colab https://github.com/max-ng/recurser 4 comments learnmachinelearning
Linked pages
- GitHub - karpathy/nanoGPT: The simplest, fastest repository for training/finetuning medium-sized GPTs. https://github.com/karpathy/nanoGPT 366 comments
- GitHub - huggingface/transformers: 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX. https://github.com/huggingface/transformers 26 comments
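The posts above concern fitting GPT2-XL's float32 weights into a free Colab GPU. The repo's actual loading technique is not described here, but a back-of-envelope sketch shows why float32 is tight: GPT2-XL is commonly quoted at roughly 1.5B parameters, so the weights alone approach 6 GiB before activations, optimizer state, or CUDA overhead. The parameter count below is an approximation, not taken from this page.

```python
def weight_gib(n_params: int, bytes_per_param: int) -> float:
    """Memory needed for model weights alone, in GiB."""
    return n_params * bytes_per_param / 2**30

# GPT2-XL is commonly quoted at ~1.5B parameters (approximate figure).
N_PARAMS = 1_500_000_000

fp32 = weight_gib(N_PARAMS, 4)  # float32: 4 bytes per parameter
fp16 = weight_gib(N_PARAMS, 2)  # float16: 2 bytes per parameter
print(f"float32 weights: ~{fp32:.1f} GiB")  # ~5.6 GiB
print(f"float16 weights: ~{fp16:.1f} GiB")  # ~2.8 GiB
```

This is only the static weight footprint; at inference time, activations and framework overhead push actual VRAM use higher, which is why weight-reduction or layer-streaming tricks matter on a ~16 GiB Colab GPU.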
- GitHub - max-ng/recurser: Reduce VRAM usage on transformer models https://github.com/max-ng/recurser