YuZhong-Chen/LLM-Navigation

πŸš—πŸ—£οΈπŸ“‘πŸ—ΎπŸ A framework for navigation tasks that can build the 3D scene graph in real-time and utilize large language model (LLM) to guide the robot.

Score: 22 / 100 (Experimental)

This project helps operations engineers, automation specialists, and roboticists guide robots through physical spaces using natural-language commands. It takes real-time visual input from a robot to build a detailed 3D understanding of its environment. The result is a robot capable of navigating and performing tasks based on human instructions.

No commits in the last 6 months.

Use this if you need to direct a robot to perform navigation tasks in a dynamic, real-world environment using everyday language instead of complex programming.

Not ideal if you are looking for a purely software-based simulation or if your robots operate in highly structured, static environments that don't require real-time scene understanding.

robot-guidance warehouse-automation field-robotics autonomous-navigation operations-automation
Badges: Stale (6m) Β· No Package Β· No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 0 / 25


Stars: 24
Forks:
Language: C++
License: Apache-2.0
Last pushed: Oct 14, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/YuZhong-Chen/LLM-Navigation"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.