Linking pages
- LaMDA: Towards Safe, Grounded, and High-Quality Dialog Models for Everything – Google AI Blog https://ai.googleblog.com/2022/01/lamda-towards-safe-grounded-and-high.html 11 comments
- GitHub - eugeneyan/applied-ml: 📚 Papers & tech blogs by companies sharing their work on data science & machine learning in production. https://github.com/eugeneyan/applied-ml 2 comments
- Google Research: Looking Back at 2020, and Forward to 2021 – Google AI Blog https://ai.googleblog.com/2021/01/google-research-looking-back-at-2020.html 0 comments
- GPT-3: We’re at the very beginning of a new app ecosystem | VentureBeat https://venturebeat.com/2021/02/27/gpt-3-were-at-the-very-beginning-of-a-new-app-ecosystem/ 0 comments
- Introducing FELIX: Flexible Text Editing Through Tagging and Insertion – Google AI Blog https://ai.googleblog.com/2021/05/introducing-felix-flexible-text-editing.html 0 comments
- Google at ICML 2020 – Google AI Blog https://ai.googleblog.com/2020/07/google-at-icml-2020.html 0 comments
- Auto-generated Summaries in Google Docs – Google AI Blog https://ai.googleblog.com/2022/03/auto-generated-summaries-in-google-docs.html 0 comments
- Using Variational Transformer Networks to Automate Document Layout Design – Google AI Blog https://ai.googleblog.com/2021/06/using-variational-transformer-networks.html 0 comments
- Conversation Summaries in Google Chat – Google AI Blog https://ai.googleblog.com/2022/11/conversation-summaries-in-google-chat.html 0 comments
Linked pages
- Better Language Models and Their Implications https://openai.com/blog/better-language-models/ 99 comments
- Turing test - Wikipedia http://en.wikipedia.org/wiki/Turing_test 80 comments
- Exploring Transfer Learning with T5: the Text-To-Text Transfer Transformer – Google AI Blog https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html 66 comments
- ALBERT: A Lite BERT for Self-Supervised Learning of Language Representations – Google AI Blog https://ai.googleblog.com/2019/12/albert-lite-bert-for-self-supervised.html 45 comments
- [1906.08237] XLNet: Generalized Autoregressive Pretraining for Language Understanding https://arxiv.org/abs/1906.08237 15 comments
- More Efficient NLP Model Pre-training with ELECTRA – Google AI Blog https://ai.googleblog.com/2020/03/more-efficient-nlp-model-pre-training.html 12 comments
- Transformer: A Novel Neural Network Architecture for Language Understanding – Google AI Blog https://ai.googleblog.com/2017/08/transformer-novel-neural-network.html 3 comments
- Open Sourcing BERT: State-of-the-Art Pre-training for Natural Language Processing – Google AI Blog https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html 0 comments
- Text summarization with TensorFlow – Google AI Blog https://ai.googleblog.com/2016/08/text-summarization-with-tensorflow.html 0 comments
- [1907.11692] RoBERTa: A Robustly Optimized BERT Pretraining Approach https://arxiv.org/abs/1907.11692 0 comments