GT-RIPL/robo-vln

PyTorch code for the ICRA 2021 paper "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation"

Quality score: 37 / 100 (Emerging)

This project helps robotics researchers develop and test agents that navigate complex indoor environments from natural language instructions. Given 3D scene data (such as Matterport3D) and written navigation commands, it trains a simulated agent that interprets those instructions and moves through the scene. It is aimed at researchers building embodied agents for vision-and-language navigation.

No commits in the last 6 months.

Use this if you are a robotics researcher or AI scientist working on training intelligent agents to understand natural language commands and navigate realistic 3D indoor environments.

Not ideal if you are looking for a pre-trained, ready-to-deploy robotic navigation system for a physical robot, or if your focus is outside of simulated environments and continuous control.

Tags: robotics research, vision-and-language navigation, embodied AI, 3D simulation, robot instruction following
Badges: Stale (6 months), No Package, No Dependents
Score breakdown:
- Maintenance: 0 / 25
- Adoption: 9 / 25
- Maturity: 16 / 25
- Community: 12 / 25


Stars: 88
Forks: 8
Language: Python
License: MIT
Last pushed: Jun 27, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/GT-RIPL/robo-vln"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
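As a minimal sketch of using the endpoint above programmatically, the snippet below builds the API URL and fetches the report with Python's standard library. The response fields are not documented on this page, so the code only parses the body as generic JSON; `quality_url` and `fetch_quality` are hypothetical helper names, not part of any published client.

```python
# Sketch: query the quality API shown above (no API key, 100 requests/day).
# The JSON schema of the response is an assumption; we treat it as a plain dict.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(registry: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for one repository, matching the curl example."""
    return f"{API_BASE}/{registry}/{owner}/{repo}"


def fetch_quality(registry: str, owner: str, repo: str) -> dict:
    """GET the quality report and return the parsed JSON body."""
    with urllib.request.urlopen(quality_url(registry, owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))


# Reproduces the URL from the curl example for this repository.
print(quality_url("transformers", "GT-RIPL", "robo-vln"))
```

Calling `fetch_quality("transformers", "GT-RIPL", "robo-vln")` would then return the same data the dashboard renders, subject to the daily rate limit.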