YicongHong/Discrete-Continuous-VLN

Code and Data of the CVPR 2022 paper: Bridging the Gap Between Learning in Discrete and Continuous Environments for Vision-and-Language Navigation

Quality score: 38 / 100 (Emerging)

This project helps robotics researchers and AI developers train autonomous agents to navigate complex indoor environments based on natural language instructions. You input a textual command (e.g., "go to the kitchen") and 3D visual data of an indoor space. The output is a precisely determined path and sequence of actions for the agent to follow, bridging the gap between high-level language and continuous physical movement.

147 stars. No commits in the last 6 months.

Use this if you are working on making robots understand and execute human language commands for indoor navigation, particularly in simulated or real-world 3D spaces like those found in Matterport3D.

Not ideal if your focus is on outdoor navigation, general object recognition, or natural language processing tasks unrelated to embodied AI.

robotics-navigation embodied-ai visual-language-instruction 3d-environment-simulation autonomous-agents
Status: Stale (6 months) · No package · No dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 12 / 25


Stars: 147
Forks: 12
Language: Python
License: MIT
Last pushed: Oct 31, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/YicongHong/Discrete-Continuous-VLN"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
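The curl call above can also be made from Python. A minimal sketch using only the standard library is shown below; note that the JSON field names (`score`, `stars`) are assumptions for illustration, since the response schema is not documented on this page.

```python
import json
import urllib.request

# Endpoint from the curl example above; no API key needed (up to 100 requests/day).
API_URL = ("https://pt-edge.onrender.com/api/v1/quality/"
           "ml-frameworks/YicongHong/Discrete-Continuous-VLN")

def fetch_quality(url: str = API_URL) -> dict:
    """Fetch the quality report and parse it as a JSON object."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def summarize(report: dict) -> str:
    """Render a one-line summary.

    The 'score' and 'stars' keys are assumed field names, not confirmed
    by the API documentation; missing keys fall back to '?'.
    """
    return f"score={report.get('score', '?')}/100 stars={report.get('stars', '?')}"
```

A usage example: `summarize(fetch_quality())` would print a compact summary line once the assumed field names are adjusted to the actual response.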