- Google releases ELECTRA, a more efficient NLP model https://ai.googleblog.com/2020/03/more-efficient-nlp-model-pre-training.html 12 comments
Linking pages
- A Speech-To-Text Practitioner’s Criticisms of Industry and Academia https://thegradient.pub/a-speech-to-text-practitioners-criticisms-of-industry-and-academia/ 53 comments
- GitHub - vlgiitr/DL_Topics: List of DL topics and resources essential for cracking interviews https://github.com/vlgiitr/DL_Topics 1 comment
- Researchers propose using 'rare word' dictionaries to bolster unsupervised language model training | VentureBeat https://venturebeat.com/2020/08/13/researchers-propose-using-rare-word-dictionaries-to-bolster-unsupervised-language-model-training/ 0 comments
- Google Research: Looking Back at 2020, and Forward to 2021 – Google AI Blog https://ai.googleblog.com/2021/01/google-research-looking-back-at-2020.html 0 comments
- PEGASUS: A State-of-the-Art Model for Abstractive Text Summarization – Google AI Blog https://ai.googleblog.com/2020/06/pegasus-state-of-art-model-for.html 0 comments
- Google at ICLR 2020 – Google AI Blog https://ai.googleblog.com/2020/04/google-at-iclr-2020.html 0 comments
- Measuring Gendered Correlations in Pre-trained NLP Models – Google AI Blog https://ai.googleblog.com/2020/10/measuring-gendered-correlations-in-pre.html 0 comments
- GitHub - tomohideshibata/BERT-related-papers: BERT-related papers https://github.com/tomohideshibata/BERT-related-papers 0 comments
- Pre-training generalist agents using offline reinforcement learning – Google AI Blog https://ai.googleblog.com/2023/02/pre-training-generalist-agents-using.html 0 comments
Linked pages
- Sentiment analysis - Wikipedia https://en.wikipedia.org/wiki/Sentiment_analysis 311 comments
- [1706.03762] Attention Is All You Need https://arxiv.org/abs/1706.03762 145 comments
- Exploring Transfer Learning with T5: the Text-To-Text Transfer Transformer – Google AI Blog https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html 66 comments
- [1810.04805] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding https://arxiv.org/abs/1810.04805 25 comments
- [1906.08237] XLNet: Generalized Autoregressive Pretraining for Language Understanding https://arxiv.org/abs/1906.08237 15 comments
- Floating-point arithmetic - Wikipedia https://en.wikipedia.org/wiki/Floating-point_arithmetic 5 comments
- The Stanford Question Answering Dataset https://rajpurkar.github.io/SQuAD-explorer/ 4 comments
- Generative adversarial network - Wikipedia https://en.wikipedia.org/wiki/Generative_adversarial_network 1 comment
- [1910.10683] Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer https://arxiv.org/abs/1910.10683 1 comment
- [1909.11942] ALBERT: A Lite BERT for Self-supervised Learning of Language Representations https://arxiv.org/abs/1909.11942 0 comments
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators | OpenReview https://openreview.net/forum?id=r1xMH1BtvB 0 comments
- [1811.02549] Language GANs Falling Short https://arxiv.org/abs/1811.02549 0 comments
- GLUE Benchmark https://gluebenchmark.com/leaderboard/ 0 comments
- [1907.11692] RoBERTa: A Robustly Optimized BERT Pretraining Approach https://arxiv.org/abs/1907.11692 0 comments
- Improving Language Understanding by Generative Pre-Training (PDF) https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf 0 comments