Lobsters
- RT-2: Vision-Language-Action Models https://robotics-transformer2.github.io/ 3 comments ai, hardware
Linking pages
- What is RT-2? Google DeepMind’s vision-language-action model for robotics https://blog.google/technology/ai/google-deepmind-rt2-robotics-vla-model/ 69 comments
- Google is training robots the way it trains AI chatbots - The Verge https://www.theverge.com/2023/7/28/23811109/google-smart-robot-generative-ai 65 comments
- State of the art in LLMs + Robotics - 2023 - hlfshell https://hlfshell.ai/posts/llms-and-robotics-papers-2023/ 4 comments
- GitHub - GT-RIPL/Awesome-LLM-Robotics: A comprehensive list of papers using large language/multi-modal models for Robotics/RL, including papers, codes, and related websites https://github.com/GT-RIPL/Awesome-LLM-Robotics 0 comments
- August 2023 Round Up - by Chris Greening - Maker News https://makernews.substack.com/p/august-2023-round-up 0 comments
- Obvious next steps in AI research - by David Beniaguev https://davidbeniaguev.substack.com/p/obvious-next-steps-in-ai-research 0 comments