Pages linking to this article:
- 2019 — Year of BERT and Transformer | by Manu Suryavansh | Towards Data Science https://towardsdatascience.com/2019-year-of-bert-and-transformer-f200b53d05b9?sk=77913662dd96ce5de77998341504902f&source=friends_link
- Speeding up BERT: How to make BERT models faster | by Grigory Sapunov | Intento https://blog.inten.to/speeding-up-bert-5528e18bb4ea
- 🏎 Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT | by Victor Sanh | HuggingFace | Medium https://medium.com/huggingface/distilbert-8cf3380435b5
Pages linked from this article:
- [1810.04805] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding https://arxiv.org/abs/1810.04805
- TensorFlow Lite | ML for Mobile and Edge Devices https://www.tensorflow.org/lite/
- GitHub - google-research/bert: TensorFlow code and pre-trained models for BERT https://github.com/google-research/bert
- [1906.08237] XLNet: Generalized Autoregressive Pretraining for Language Understanding https://arxiv.org/abs/1906.08237
- [1506.02626] Learning both Weights and Connections for Efficient Neural Networks https://arxiv.org/abs/1506.02626
- [1902.09574] The State of Sparsity in Deep Neural Networks https://arxiv.org/abs/1902.09574
- [1907.12412] ERNIE 2.0: A Continual Pre-training Framework for Language Understanding https://arxiv.org/abs/1907.12412
- [1905.10650] Are Sixteen Heads Really Better than One? https://arxiv.org/abs/1905.10650
- [1412.6550] FitNets: Hints for Thin Deep Nets http://arxiv.org/abs/1412.6550