Hacker News
- 20B-parameter Alexa model sets new marks in few-shot learning https://www.amazon.science/blog/20b-parameter-alexa-model-sets-new-marks-in-few-shot-learning 86 comments
Linking pages
- AlexaTM 20B: Few-shot learning using a large-scale multilingual seq2seq model - Amazon Science https://www.amazon.science/publications/alexatm-20b-few-shot-learning-using-a-large-scale-multilingual-seq2seq-model 1 comment
- Why Amazon Scholar Yossi Keshet remains "excited about speech" - Amazon Science https://www.amazon.science/working-at-amazon/why-amazon-scholar-yossi-keshet-remains-excited-about-speech 0 comments
- Amazon's AlexaTM 20B Model Outperforms GPT-3 on NLP Benchmarks https://www.infoq.com/news/2022/08/alexa-tm-20b-model/ 0 comments
- GitHub - microsoft/DeepSpeed: DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. https://github.com/microsoft/DeepSpeed 0 comments
- Latest News - DeepSpeed https://www.deepspeed.ai/ 0 comments
- Amazon's 20B-Parameter Alexa Model Sets New Marks In Few-Shot Learning Along With Low Carbon Footprint During Training (One-Fifth of GPT-3’s) - MarkTechPost https://www.marktechpost.com/2022/08/03/amazons-20b-parameter-alexa-model-sets-new-marks-in-few-shot-learning-along-with-low-carbon-footprint-during-training-one-fifth-of-gpt-3s/ 0 comments
- AWS CodeWhisperer creates computer code from natural language - Amazon Science https://www.amazon.science/latest-news/aws-codewhisperer-creates-computer-code-from-natural-language 0 comments
Linked pages
- [2005.14165] Language Models are Few-Shot Learners https://arxiv.org/abs/2005.14165 201 comments
- Secure and resizable cloud compute – Amazon EC2 – Amazon Web Services https://aws.amazon.com/ec2/ 46 comments
- AlexaTM 20B: Few-shot learning using a large-scale multilingual seq2seq model - Amazon Science https://www.amazon.science/publications/alexatm-20b-few-shot-learning-using-a-large-scale-multilingual-seq2seq-model 1 comment
- [2208.01448] AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model https://arxiv.org/abs/2208.01448 0 comments
- Making DeepSpeed ZeRO run efficiently on more-affordable hardware - Amazon Science https://www.amazon.science/blog/making-deepspeed-zero-run-efficiently-on-more-affordable-hardware 0 comments