kyegomez/RT-X

PyTorch implementation of the RT-1-X and RT-2-X models from the paper "Open X-Embodiment: Robotic Learning Datasets and RT-X Models".

Score: 51 / 100 (Established)

This project provides tools to control robots using a combination of visual and text input. You supply the system with video or image feeds of a robot's environment along with text commands describing the task; the output guides the robot's actions, enabling it to perform real-world tasks. It is aimed at robotics researchers and developers working on autonomous agents.


Use this if you are developing robotic systems that need to understand and execute complex commands based on both visual perception and natural language instructions.

Not ideal if you need a pre-packaged, ready-to-deploy solution for a specific robot or a simple, direct control interface without multimodal AI capabilities.

robotics autonomous-systems human-robot-interaction robotic-process-automation machine-perception
No package. No dependents.
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 15 / 25

How are scores calculated?
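One pattern worth noting: the four subscores above appear to sum to the overall score shown at the top of this page. This is an inference from the numbers displayed here, not a documented formula:

```python
# Assumption: the overall quality score is the sum of the four
# 25-point subscores listed above (inferred, not documented).
subscores = {
    "Maintenance": 10,
    "Adoption": 10,
    "Maturity": 16,
    "Community": 15,
}

overall = sum(subscores.values())
print(overall)  # → 51, matching the 51 / 100 shown at the top
```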

Stars: 237
Forks: 24
Language: Python
License: MIT
Last pushed: Mar 06, 2026
Commits (30d): 0

Get this data via the API:

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/RT-X"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
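The same request can be made from Python. A minimal sketch follows; the endpoint path is taken from the curl example above, but the shape of the JSON response is not documented on this page, so the live fetch is left commented out:

```python
from urllib.request import urlopen
import json

# Base endpoint from the curl example above; the response schema is
# an assumption, so the live call is commented out below.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Return the quality endpoint URL for a GitHub owner/repo."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("kyegomez", "RT-X")
print(url)

# Uncomment to fetch live (100 requests/day without a key):
# data = json.loads(urlopen(url).read())
# print(data)
```

Keeping the URL construction in a helper makes it easy to query other repositories against the same endpoint.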