Hacker News
- T5: The Text-to-Text Transfer Transformer https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html 66 comments
Linking pages
- MUM: A new AI milestone for understanding information https://blog.google/products/search/introducing-mum/ 208 comments
- Distilling step-by-step: Outperforming larger language models with less training data and smaller model sizes – Google Research Blog https://blog.research.google/2023/09/distilling-step-by-step-outperforming.html 123 comments
- Google Open-Sources Trillion-Parameter AI Language Model Switch Transformer https://www.infoq.com/news/2021/02/google-trillion-parameter-ai/ 95 comments
- Google Research: Themes from 2021 and Beyond – Google AI Blog https://ai.googleblog.com/2022/01/google-research-themes-from-2021-and.html 52 comments
- Minority Voices 'Filtered' Out of Google Natural Language Processing Models - Unite.AI https://www.unite.ai/minority-voices-filtered-out-of-google-natural-language-processing-models/ 34 comments
- Imagen: Google introduces DALL-E 2 competition https://mixed-news.com/en/imagen-google-introduces-dall-e-2-competition/ 17 comments
- Vid2Seq: a pretrained visual language model for describing multi-event videos – Google AI Blog https://ai.googleblog.com/2023/03/vid2seq-pretrained-visual-language.html 16 comments
- More Efficient NLP Model Pre-training with ELECTRA – Google AI Blog https://ai.googleblog.com/2020/03/more-efficient-nlp-model-pre-training.html 12 comments
- KELM: Integrating Knowledge Graphs with Language Model Pre-training Corpora – Google AI Blog https://ai.googleblog.com/2021/05/kelm-integrating-knowledge-graphs-with.html 8 comments
- Open-Sourcing BiT: Exploring Large-Scale Pre-training for Computer Vision – Google AI Blog https://ai.googleblog.com/2020/05/open-sourcing-bit-exploring-large-scale.html 6 comments
- Google Research, 2022 & beyond: Language, vision and generative models – Google AI Blog https://ai.googleblog.com/2023/01/google-research-2022-beyond-language.html 5 comments
- Fast and Easy Infinitely Wide Networks with Neural Tangents – Google AI Blog https://ai.googleblog.com/2020/03/fast-and-easy-infinitely-wide-networks.html 3 comments
- A decade in deep learning, and what's next https://blog.google/technology/ai/decade-deep-learning-and-whats-next/ 1 comment
- Everything Product People Need to Know About Transformers (Part 3: BERT) | by Yacov Lewis | Towards Data Science https://towardsdatascience.com/everything-product-people-need-to-know-about-transformers-part-3-bert-a1227cead488 1 comment
- Better Language Models Without Massive Compute – Google AI Blog https://ai.googleblog.com/2022/11/better-language-models-without-massive.html 1 comment
- AdaTape: Foundation model with adaptive computation and dynamic read-and-write – Google Research Blog https://ai.googleblog.com/2023/08/adatape-foundation-model-with-adaptive.html 1 comment
- PaLI: Scaling Language-Image Learning in 100+ Languages – Google AI Blog https://ai.googleblog.com/2022/09/pali-scaling-language-image-learning-in.html 0 comments
- Two New Datasets for Conversational NLP: TimeDial and Disfl-QA – Google AI Blog https://ai.googleblog.com/2021/08/two-new-datasets-for-conversational-nlp.html 0 comments
- The C4_200M Synthetic Dataset for Grammatical Error Correction – Google AI Blog https://ai.googleblog.com/2021/08/the-c4200m-synthetic-dataset-for.html 0 comments
- TensorFlow Dev Summit 2020: Livestream Highlights | by Derrick Mwiti | Heartbeat https://heartbeat.fritz.ai/tensorflow-dev-summit-2020-livestream-highlights-8b99f7006743 0 comments
Linked pages
- Wikipedia https://wikipedia.org 1983 comments
- http://aidungeon.io/ 231 comments
- InferKit https://talktotransformer.com/ 166 comments
- Common Crawl https://commoncrawl.org/ 85 comments
- ALBERT: A Lite BERT for Self-Supervised Learning of Language Representations – Google AI Blog https://ai.googleblog.com/2019/12/albert-lite-bert-for-self-supervised.html 45 comments
- Reformer: The Efficient Transformer – Google AI Blog https://ai.googleblog.com/2020/01/reformer-efficient-transformer.html 44 comments
- [1906.08237] XLNet: Generalized Autoregressive Pretraining for Language Understanding https://arxiv.org/abs/1906.08237 15 comments
- The Stanford Question Answering Dataset https://rajpurkar.github.io/SQuAD-explorer/ 4 comments
- T5 trivia http://t5-trivia.glitch.me/ 1 comment
- https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf 1 comment
- [1910.10683] Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer https://arxiv.org/abs/1910.10683 1 comment
- [1801.06146] Universal Language Model Fine-tuning for Text Classification https://arxiv.org/abs/1801.06146 0 comments
- Natural Questions: a New Corpus and Challenge for Question Answering Research – Google AI Blog https://ai.googleblog.com/2019/01/natural-questions-new-corpus-and.html 0 comments
- c4 | TensorFlow Datasets https://www.tensorflow.org/datasets/catalog/c4 0 comments
- Open Sourcing BERT: State-of-the-Art Pre-training for Natural Language Processing – Google AI Blog https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html 0 comments
- Train and run machine learning models faster | Cloud TPU | Google Cloud https://cloud.google.com/tpu/ 0 comments
- [1901.11504] Multi-Task Deep Neural Networks for Natural Language Understanding https://arxiv.org/abs/1901.11504 0 comments
- [1907.11692] RoBERTa: A Robustly Optimized BERT Pretraining Approach https://arxiv.org/abs/1907.11692 0 comments
- GitHub - google-research/text-to-text-transfer-transformer: Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" https://github.com/google-research/text-to-text-transfer-transformer 0 comments
- https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf 0 comments