Linking pages
- MaMMUT: A simple vision-encoder text-decoder architecture for multimodal tasks – Google AI Blog https://ai.googleblog.com/2023/05/mammut-simple-vision-encoder-text.html 33 comments
- Word embeddings | Text | TensorFlow https://www.tensorflow.org/tutorials/text/word_embeddings 5 comments
- GitHub - mzguntalan/h-former: H-Former is a VAE for generating in-between fonts (or combining fonts). Its encoder uses a PointNet and a transformer to compute a code vector for a glyph. Its decoder is composed of multiple independent decoders that act on a code vector to reconstruct a point cloud representing a glyph. https://github.com/mzguntalan/h-former 4 comments
- A comprehensive guide to learning LLMs (Foundational Models) https://www.linkedin.com/pulse/comprehensive-guide-learning-llms-foundational-models-yeddula/ 1 comment
- GitHub - markusaksli/ai-music: A vanilla Transformer decoder music generation model trained on Final Fantasy OST MIDI songs https://github.com/markusaksli/ai-music 0 comments
- Aman's AI Journal • Primers • Transformers https://aman.ai/primers/ai/transformers/ 0 comments
Linked pages
- TensorFlow http://tensorflow.org/ 440 comments
- Competitive programming with AlphaCode https://www.deepmind.com/blog/competitive-programming-with-alphacode 295 comments
- Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance – Google AI Blog https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html 279 comments
- MUM: A new AI milestone for understanding information https://blog.google/products/search/introducing-mum/ 208 comments
- [1706.03762] Attention Is All You Need https://arxiv.org/abs/1706.03762 145 comments
- Reformer: The Efficient Transformer – Google AI Blog https://ai.googleblog.com/2020/01/reformer-efficient-transformer.html 44 comments
- [2207.09238] Formal Algorithms for Transformers https://arxiv.org/abs/2207.09238 27 comments
- [1810.04805] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding https://arxiv.org/abs/1810.04805 25 comments
- LaMDA: Towards Safe, Grounded, and High-Quality Dialog Models for Everything – Google AI Blog https://ai.googleblog.com/2022/01/lamda-towards-safe-grounded-and-high.html 11 comments
- Highly accurate protein structure prediction with AlphaFold | Nature https://www.nature.com/articles/s41586-021-03819-2 9 comments
- Training Generalist Agents with Multi-Game Decision Transformers – Google AI Blog https://ai.googleblog.com/2022/07/training-generalist-agents-with-multi.html 6 comments
- Machine Learning Glossary | Google Developers https://developers.google.com/machine-learning/glossary/ 3 comments
- Transformer: A Novel Neural Network Architecture for Language Understanding – Google AI Blog https://ai.googleblog.com/2017/08/transformer-novel-neural-network.html 3 comments
- Convolutional Neural Network (CNN) | TensorFlow Core https://www.tensorflow.org/tutorials/images/cnn 2 comments
- OptFormer: Towards Universal Hyperparameter Optimization with Transformers – Google AI Blog https://ai.googleblog.com/2022/08/optformer-towards-universal.html 1 comment
- Transformers for Image Recognition at Scale – Google AI Blog https://ai.googleblog.com/2020/12/transformers-for-image-recognition-at.html 1 comment
- Music Transcription with Transformers https://magenta.tensorflow.org/transcription-with-transformers 1 comment
- [1910.10683] Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer https://arxiv.org/abs/1910.10683 1 comment
- Open Sourcing BERT: State-of-the-Art Pre-training for Natural Language Processing – Google AI Blog https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html 0 comments
- [1610.10099] Neural Machine Translation in Linear Time https://arxiv.org/abs/1610.10099 0 comments
Page: Neural machine translation with a Transformer and Keras | Text | TensorFlow