bagh2178/GC-VLN

[CoRL 2025] GC-VLN: Instruction as Graph Constraints for Training-free Vision-and-Language Navigation

Score: 20 / 100 (Experimental)

This project helps roboticists program robots to navigate complex 3D environments using natural language instructions. It takes human-readable directions, like "go to the kitchen and turn left at the island," and translates them into a navigation plan for a robot, even in previously unseen spaces. This is ideal for researchers or developers building autonomous systems that need to understand and execute complex verbal commands without training for each specific environment.

No commits in the last 6 months.

Use this if you need to guide robots through detailed 3D environments using natural language instructions, especially in scenarios where pre-training for every new space isn't feasible.

Not ideal if your robot navigation tasks involve simple, repetitive movements in well-mapped, static environments or if you are not working with sophisticated 3D visual input.

robotics autonomous-navigation human-robot-interaction 3D-environment-mapping natural-language-processing
No License · Stale 6m · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 8 / 25
Maturity: 7 / 25
Community: 3 / 25


Stars: 64
Forks: 1
Language: (not listed)
License: none
Last pushed: Sep 16, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/bagh2178/GC-VLN"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
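The curl command above can also be called from a script. Below is a minimal Python sketch using only the standard library; the URL layout (base path, `transformers` ecosystem segment, repo name) is taken from the curl example, but the response schema is not documented here, so the sketch just returns the raw JSON.

```python
import json
import urllib.request

# Base URL copied from the curl example above (assumption: all quality
# lookups share this prefix, followed by /<ecosystem>/<owner>/<repo>).
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, repo: str) -> str:
    """Build the quality endpoint URL for a repo like 'bagh2178/GC-VLN'."""
    return f"{API_BASE}/{ecosystem}/{repo}"


def fetch_quality(ecosystem: str, repo: str) -> dict:
    """Fetch a quality report and parse it as JSON (schema unspecified)."""
    with urllib.request.urlopen(quality_url(ecosystem, repo)) as resp:
        return json.load(resp)


# Usage (makes a live request, so it counts against the 100/day quota):
#   report = fetch_quality("transformers", "bagh2178/GC-VLN")
#   print(json.dumps(report, indent=2))
```

Note that each call to `fetch_quality` is a live request against the anonymous 100-requests/day limit; with a free key (1,000/day) you would attach it per the API's own documentation.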