Linked pages
- Towards a Conversational Agent that Can Chat About…Anything – Google AI Blog https://ai.googleblog.com/2020/01/towards-conversational-agent-that-can.html 152 comments
- Turing-NLG: A 17-billion-parameter language model by Microsoft - Microsoft Research https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/ 139 comments
- Understanding searches better than ever before https://www.blog.google/products/search/search-language-understanding-bert/ 109 comments
- NLP's Clever Hans Moment has Arrived https://thegradient.pub/nlps-clever-hans-moment-has-arrived/ 89 comments
- Online speech recognition with wav2letter@anywhere https://ai.facebook.com/blog/online-speech-recognition-with-wav2letteranywhere/ 63 comments
- Reformer: The Efficient Transformer – Google AI Blog https://ai.googleblog.com/2020/01/reformer-efficient-transformer.html 44 comments
- Locality-sensitive hashing - Wikipedia https://en.wikipedia.org/wiki/Locality-sensitive_hashing 40 comments (see the sketch after this list)
- [1810.04805] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding https://arxiv.org/abs/1810.04805 25 comments
- [1912.01412] Deep Learning for Symbolic Mathematics https://arxiv.org/abs/1912.01412 18 comments
- [2002.12327] A Primer in BERTology: What we know about how BERT works https://arxiv.org/abs/2002.12327 16 comments
- [1906.08237] XLNet: Generalized Autoregressive Pretraining for Language Understanding https://arxiv.org/abs/1906.08237 15 comments
- [1503.02531] Distilling the Knowledge in a Neural Network https://arxiv.org/abs/1503.02531 5 comments
- Transformer: A Novel Neural Network Architecture for Language Understanding – Google AI Blog https://ai.googleblog.com/2017/08/transformer-novel-neural-network.html 3 comments
- GitHub - microsoft/DeepSpeed: DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. https://github.com/microsoft/DeepSpeed 1 comment
- [1910.10683] Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer https://arxiv.org/abs/1910.10683 1 comment
- [2001.04451] Reformer: The Efficient Transformer https://arxiv.org/abs/2001.04451 0 comments
- c4 | TensorFlow Datasets https://www.tensorflow.org/datasets/catalog/c4 0 comments
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators | OpenReview https://openreview.net/forum?id=r1xMH1BtvB 0 comments
- Encode, Tag and Realize: A Controllable and Efficient Approach for Text Generation – Google AI Blog https://ai.googleblog.com/2020/01/encode-tag-and-realize-controllable-and.html 0 comments
- Applying AutoML to Transformer Architectures – Google AI Blog https://ai.googleblog.com/2019/06/applying-automl-to-transformer.html 0 comments
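Several of the links above revolve around locality-sensitive hashing, the trick Reformer uses to approximate attention in sub-quadratic time. As a rough illustration of the core idea only, here is a minimal random-hyperplane LSH sketch in NumPy; the function name and parameters are illustrative and not taken from the Reformer code or any of the linked papers.

```python
import numpy as np

def hash_vectors(vectors, n_planes=8, seed=0):
    """Assign each row vector an n_planes-bit bucket id.

    Vectors with high cosine similarity land on the same side of most
    random hyperplanes, so they tend to collide in the same bucket.
    """
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((vectors.shape[1], n_planes))
    bits = (vectors @ planes) > 0                         # sign pattern per vector
    return bits.astype(int) @ (1 << np.arange(n_planes))  # pack bits into an integer id

# Nearby vectors share buckets far more often than unrelated ones.
x = np.random.default_rng(1).standard_normal((4, 64))
x_near = x + 0.01 * np.random.default_rng(2).standard_normal((4, 64))
print(hash_vectors(x) == hash_vectors(x_near))            # mostly (usually all) True
```

Because collisions concentrate similar vectors into shared buckets, a model can restrict expensive pairwise comparisons (such as attention scores) to vectors within the same bucket rather than over all pairs.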