Linked pages
- [1706.03762] Attention Is All You Need https://arxiv.org/abs/1706.03762 145 comments
- [2105.01601] MLP-Mixer: An all-MLP Architecture for Vision https://arxiv.org/abs/2105.01601 59 comments
- [2105.08050] Pay Attention to MLPs https://arxiv.org/abs/2105.08050 25 comments
- ImageNet http://image-net.org/index 12 comments
- [2104.00298] EfficientNetV2: Smaller Models and Faster Training https://arxiv.org/abs/2104.00298 6 comments
- Open-Sourcing BiT: Exploring Large-Scale Pre-training for Computer Vision – Google AI Blog https://ai.googleblog.com/2020/05/open-sourcing-bit-exploring-large-scale.html 6 comments
- COCO - Common Objects in Context http://cocodataset.org 2 comments
- Transformers for Image Recognition at Scale – Google AI Blog https://ai.googleblog.com/2020/12/transformers-for-image-recognition-at.html 1 comment
- [2106.04803] CoAtNet: Marrying Convolution and Attention for All Data Sizes https://arxiv.org/abs/2106.04803 1 comment
- [1707.02968] Revisiting Unreasonable Effectiveness of Data in Deep Learning Era https://arxiv.org/abs/1707.02968 0 comments
- EfficientNet: Improving Accuracy and Efficiency through AutoML and Model Scaling – Google AI Blog https://ai.googleblog.com/2019/05/efficientnet-improving-accuracy-and.html 0 comments
- Attention Is All You Need (NeurIPS 2017 proceedings PDF) https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf 0 comments