Linking pages
- Google Research: Themes from 2021 and Beyond – Google AI Blog https://ai.googleblog.com/2022/01/google-research-themes-from-2021-and.html 52 comments
- Good News About the Carbon Footprint of Machine Learning Training – Google AI Blog https://ai.googleblog.com/2022/02/good-news-about-carbon-footprint-of.html 0 comments
Linked pages
- [2005.14165] Language Models are Few-Shot Learners https://arxiv.org/abs/2005.14165 201 comments
- Exploring Transfer Learning with T5: the Text-To-Text Transfer Transformer – Google AI Blog https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html 66 comments
- Introducing TF-Coder, a tool that writes tricky TensorFlow expressions for you! — The TensorFlow Blog https://blog.tensorflow.org/2020/08/introducing-tensorflow-coder-tool.html 35 comments
- Turing completeness - Wikipedia https://en.wikipedia.org/wiki/Turing_completeness 16 comments
- Graph (discrete mathematics) - Wikipedia https://en.wikipedia.org/wiki/Graph_(discrete_mathematics) 15 comments
- Universal approximation theorem - Wikipedia https://en.wikipedia.org/wiki/Universal_approximation_theorem 4 comments
- Transformer: A Novel Neural Network Architecture for Language Understanding – Google AI Blog https://ai.googleblog.com/2017/08/transformer-novel-neural-network.html 3 comments
- Single instruction, multiple data - Wikipedia https://en.wikipedia.org/wiki/SIMD 3 comments
- Automatic summarization - Wikipedia https://en.wikipedia.org/wiki/Automatic_summarization 2 comments
- [2011.04006] Long Range Arena: A Benchmark for Efficient Transformers https://arxiv.org/abs/2011.04006 1 comment
- [2007.14062] Big Bird: Transformers for Longer Sequences https://arxiv.org/abs/2007.14062 0 comments
- Attention? Attention! https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#born-for-translation 0 comments
- Progress and Challenges in Long-Form Open-Domain Question Answering – Google AI Blog https://ai.googleblog.com/2021/03/progress-and-challenges-in-long-form.html 0 comments
- Exploring Massively Multilingual, Massive Neural Machine Translation – Google AI Blog https://ai.googleblog.com/2019/10/exploring-massively-multilingual.html 0 comments
- [1901.03429] On the Turing Completeness of Modern Neural Network Architectures https://arxiv.org/abs/1901.03429 0 comments
- Watts–Strogatz model - Wikipedia https://en.wikipedia.org/wiki/Watts%E2%80%93Strogatz_model 0 comments
- Open Sourcing BERT: State-of-the-Art Pre-training for Natural Language Processing – Google AI Blog https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html 0 comments
- [1907.11692] RoBERTa: A Robustly Optimized BERT Pretraining Approach https://arxiv.org/abs/1907.11692 0 comments
- [1809.09600] HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering https://arxiv.org/abs/1809.09600 0 comments
- Complete graph - Wikipedia https://en.wikipedia.org/wiki/Complete_graph 0 comments
Title: Constructing Transformers For Longer Sequences with Sparse Attention Methods – Google AI Blog