GT-RIPL/robo-vln
PyTorch code for the ICRA'21 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation"
This project helps robotics researchers develop and test agents that navigate complex indoor environments from natural-language instructions. Given 3D scene data (e.g., Matterport3D) and written navigation commands, it trains an agent that interprets those instructions and moves through the simulated space. It is aimed at researchers building intelligent agents for robotic vision-and-language navigation.
No commits in the last 6 months.
Use this if you are a robotics researcher or AI scientist working on training intelligent agents to understand natural language commands and navigate realistic 3D indoor environments.
Not ideal if you are looking for a pre-trained, ready-to-deploy robotic navigation system for a physical robot, or if your focus is outside of simulated environments and continuous control.
Stars: 88
Forks: 8
Language: Python
License: MIT
Category:
Last pushed: Jun 27, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/GT-RIPL/robo-vln"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
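The curl command above maps directly to a small Python helper. A minimal sketch using only the standard library; the helper names and the assumption that the endpoint returns JSON are ours, not documented by the API:

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def build_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch the repo-quality record; the response is assumed to be JSON."""
    with urllib.request.urlopen(build_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)
```

For the repository on this page, `fetch_quality("GT-RIPL", "robo-vln")` hits the same URL as the curl example.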
Higher-rated alternatives
dorarad/gansformer
Generative Adversarial Transformers
j-min/VL-T5
PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021)
invictus717/MetaTransformer
Meta-Transformer for Unified Multimodal Learning
rkansal47/MPGAN
The message passing GAN https://arxiv.org/abs/2106.11535 and generative adversarial particle...
Yachay-AI/byt5-geotagging
Confidence- and ByT5-based geotagging model that predicts coordinates from text alone.