kyegomez/RT-2
Democratization of RT-2 "RT-2: New model translates vision and language into action"
This project helps roboticists and automation engineers by providing a model that translates visual information and natural language commands directly into robot actions. You input images from a robot's camera and a natural language instruction, and the model outputs the necessary control actions for the robot to execute the task. It's designed for anyone working with robots that need to understand and act upon visual and verbal cues in dynamic environments.
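To make the input/output contract concrete, the sketch below shows how such a model might be called from Python. The rt2 package name, the RT2 class, the tensor shapes, and the tokenization step are assumptions for illustration and may not match the repository's actual API.

import torch
from rt2.model import RT2  # assumed import path; check the repo's README

# Assumed usage: instantiate the vision-language-action model
model = RT2()

# One camera frame as a (batch, channels, height, width) tensor
frames = torch.randn(1, 3, 256, 256)

# A tokenized instruction such as "pick up the red block"
# (token IDs are placeholders; a real pipeline would use the model's tokenizer)
instruction_tokens = torch.randint(0, 20_000, (1, 1024))

# Forward pass: vision + language in, logits over discretized action tokens out
action_logits = model(frames, instruction_tokens)
print(action_logits.shape)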
554 stars. No commits in the last 6 months.
Use this if you need your robots to interpret complex visual scenes and human language commands to perform physical tasks, such as in automated factories or for assisted care.
Not ideal if your application requires extremely precise, real-time control based solely on sensor data without any language interpretation.
Stars: 554
Forks: 68
Language: Python
License: MIT
Category: transformers
Last pushed: Jul 26, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/RT-2"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
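The same endpoint can also be queried from Python; a minimal sketch with the requests library follows. The "X-API-Key" header name is an assumption, so check the API documentation for the actual authentication scheme.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/RT-2"

# Anonymous access (100 requests/day per the limits above)
response = requests.get(URL, timeout=10)
response.raise_for_status()
print(response.json())

# With a free key (1,000 requests/day); "X-API-Key" is an assumed header name
response = requests.get(URL, headers={"X-API-Key": "YOUR_KEY"}, timeout=10)
print(response.status_code)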
Higher-rated alternatives
kyegomez/RT-X
PyTorch implementation of the models RT-1-X and RT-2-X from the paper: "Open X-Embodiment:...
kyegomez/PALI3
Implementation of PALI3 from the paper "PALI-3 VISION LANGUAGE MODELS: SMALLER, FASTER, STRONGER"
chuanyangjin/MMToM-QA
[🏆Outstanding Paper Award at ACL 2024] MMToM-QA: Multimodal Theory of Mind Question Answering
lyuchenyang/Macaw-LLM
Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration
Muennighoff/vilio
🥶Vilio: State-of-the-art VL models in PyTorch & PaddlePaddle