YicongHong/Discrete-Continuous-VLN
Code and Data of the CVPR 2022 paper: Bridging the Gap Between Learning in Discrete and Continuous Environments for Vision-and-Language Navigation
This project helps robotics researchers and AI developers train autonomous agents to navigate complex indoor environments from natural language instructions. You provide a textual command (e.g., "go to the kitchen") and 3D visual observations of an indoor space; the agent produces a path and a sequence of low-level actions to follow, bridging high-level language and continuous physical movement.
147 stars. No commits in the last 6 months.
Use this if you are working on making robots understand and execute human language commands for indoor navigation, particularly in simulated or real-world 3D spaces like those found in Matterport3D.
Not ideal if your focus is on outdoor navigation, general object recognition, or natural language processing tasks unrelated to embodied AI.
Stars: 147
Forks: 12
Language: Python
License: MIT
Category: ml-frameworks
Last pushed: Oct 31, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/YicongHong/Discrete-Continuous-VLN"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
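For programmatic use, the curl command above can be wrapped in a small Python helper. This is a minimal sketch using only the standard library; the response schema is not documented on this page, so the sketch simply returns the raw JSON as a dict rather than assuming any field names.

```python
import json
import urllib.request

# Base URL of the quality API shown in the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for one repository's quality record."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch the quality record and parse it as JSON.

    Raises urllib.error.URLError on network failure; the schema of the
    returned dict is whatever the API sends back (undocumented here).
    """
    with urllib.request.urlopen(quality_url(category, owner, repo), timeout=timeout) as resp:
        return json.load(resp)


if __name__ == "__main__":
    data = fetch_quality("ml-frameworks", "YicongHong", "Discrete-Continuous-VLN")
    print(json.dumps(data, indent=2))
```

Within the free tier (100 requests/day), no authentication header is needed; if the API's keyed tier uses a header or query parameter, consult its docs, since that detail is not shown on this page.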
Higher-rated alternatives
StanfordASL/Trajectron
Code accompanying "The Trajectron: Probabilistic Multi-Agent Trajectory Modeling with Dynamic...
StanfordASL/Trajectron-plus-plus
Code accompanying the ECCV 2020 paper "Trajectron++: Dynamically-Feasible Trajectory Forecasting...
uber-research/LaneGCN
[ECCV2020 Oral] Learning Lane Graph Representations for Motion Forecasting
agrimgupta92/sgan
Code for "Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks",...
devendrachaplot/Neural-SLAM
Pytorch code for ICLR-20 Paper "Learning to Explore using Active Neural SLAM"