- Google has released a new model for machine attention (the Performer) - how far are we still from capturing the power and flexibility of human/animal attention? https://ai.googleblog.com/2020/10/rethinking-attention-with-performers.html?m=1 4 comments cogsci
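The Performer post linked above replaces exact softmax attention (quadratic in sequence length) with a random-feature approximation whose cost is linear in sequence length. A minimal NumPy sketch of that idea using positive random features, in the spirit of the paper's FAVOR+ mechanism - the function names, feature count `m`, and sizes here are illustrative assumptions, not Google's implementation:

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Exact attention: materialises an L x L matrix, O(L^2) in sequence length.
    A = np.exp(Q @ K.T / np.sqrt(Q.shape[-1]))
    return (A / A.sum(axis=-1, keepdims=True)) @ V

def performer_attention(Q, K, V, m=4096, seed=0):
    # Positive random features approximating the softmax kernel:
    # exp(q.k) = E_w[exp(w.q - |q|^2/2) * exp(w.k - |k|^2/2)], w ~ N(0, I).
    # Cost is O(L * m * d); the L x L matrix is never formed.
    d = Q.shape[-1]
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((m, d))

    def phi(X):
        X = X / d ** 0.25  # fold the 1/sqrt(d) temperature into the features
        return np.exp(X @ W.T - (X ** 2).sum(-1, keepdims=True) / 2) / np.sqrt(m)

    Qp, Kp = phi(Q), phi(K)
    num = Qp @ (Kp.T @ V)        # (L, m) @ (m, d): linear in L
    den = Qp @ Kp.sum(axis=0)    # row-wise softmax normaliser, shape (L,)
    return num / den[:, None]

L, d = 64, 16
rng = np.random.default_rng(1)
Q, K, V = (0.5 * rng.standard_normal((L, d)) for _ in range(3))
exact = softmax_attention(Q, K, V)
approx = performer_attention(Q, K, V)
print(np.abs(exact - approx).max())  # approximation error shrinks as m grows
```

With more random features `m` the approximation tightens (error roughly like 1/sqrt(m)), which is the trade the Performer makes to handle the long sequences discussed in several of the linked pages (Reformer, Big Bird, Linformer).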
Linked pages
- Towards a Conversational Agent that Can Chat About…Anything – Google AI Blog https://ai.googleblog.com/2020/01/towards-conversational-agent-that-can.html 152 comments
- Graph theory - Wikipedia http://en.wikipedia.org/wiki/Graph_theory 103 comments
- Image GPT https://openai.com/blog/image-gpt/ 84 comments
- Reformer: The Efficient Transformer – Google AI Blog https://ai.googleblog.com/2020/01/reformer-efficient-transformer.html 44 comments
- [1812.08434] Graph Neural Networks: A Review of Methods and Applications https://arxiv.org/abs/1812.08434 15 comments
- Generative Modeling with Sparse Transformers https://openai.com/blog/sparse-transformer/ 9 comments
- Transformer-XL: Unleashing the Potential of Attention Models – Google AI Blog https://ai.googleblog.com/2019/01/transformer-xl-unleashing-potential-of.html 7 comments
- Transformer: A Novel Neural Network Architecture for Language Understanding – Google AI Blog https://ai.googleblog.com/2017/08/transformer-novel-neural-network.html 3 comments
- https://www.biorxiv.org/content/10.1101/622803v3 1 comment
- [2007.14062] Big Bird: Transformers for Longer Sequences https://arxiv.org/abs/2007.14062 0 comments
- Attention? Attention! https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#born-for-translation 0 comments
- [1312.3005] One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling http://arxiv.org/abs/1312.3005 0 comments
- Open Sourcing BERT: State-of-the-Art Pre-training for Natural Language Processing – Google AI Blog https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html 0 comments
- https://arxiv.org/abs/1906.02243 0 comments
- Music Transformer: Generating Music with Long-Term Structure https://magenta.tensorflow.org/music-transformer 0 comments
- [2009.06732] Efficient Transformers: A Survey https://arxiv.org/abs/2009.06732 0 comments
- [2006.04768] Linformer: Self-Attention with Linear Complexity https://arxiv.org/abs/2006.04768 0 comments