YuZhong-Chen/LLM-Navigation
A framework for navigation tasks that builds a 3D scene graph in real time and uses a large language model (LLM) to guide the robot.
This project helps operations engineers, automation specialists, and roboticists guide robots through physical spaces using natural language commands. It takes real-time visual input from the robot to build a detailed 3D understanding of its environment, producing a robot that can navigate and perform tasks based on human instructions.
No commits in the last 6 months.
Use this if you need to direct a robot to perform navigation tasks in a dynamic, real-world environment using everyday language instead of complex programming.
Not ideal if you are looking for a purely software-based simulation or if your robots operate in highly structured, static environments that don't require real-time scene understanding.
Stars: 24
Forks: —
Language: C++
License: Apache-2.0
Category:
Last pushed: Oct 14, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/YuZhong-Chen/LLM-Navigation"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
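The endpoint above returns JSON; here is a minimal Python sketch for fetching and summarizing it. The field names (`stars`, `language`, `last_pushed`) are assumptions for illustration, since the response schema is not documented on this page; inspect the real payload before relying on them.

```python
import json
import urllib.request

# Endpoint from the curl command above; no key needed up to 100 requests/day.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/YuZhong-Chen/LLM-Navigation"


def parse_quality_record(raw: str) -> dict:
    """Parse the API response and pull out a few summary fields.

    NOTE: the field names below are assumed, not documented; adjust
    them to match the actual JSON payload.
    """
    data = json.loads(raw)
    return {
        "stars": data.get("stars"),
        "language": data.get("language"),
        "last_pushed": data.get("last_pushed"),
    }


def fetch_quality_record(url: str = API_URL) -> dict:
    """Fetch the record over HTTP and parse it."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_quality_record(resp.read().decode("utf-8"))


if __name__ == "__main__":
    # Offline demo using a sample payload shaped like the assumed schema.
    sample = '{"stars": 24, "language": "C++", "last_pushed": "2024-10-14"}'
    print(parse_quality_record(sample))
```

If you have a free key for the 1,000/day tier, check the API's docs for how to pass it; the transport details are not specified here.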
Higher-rated alternatives
langfengQ/verl-agent
verl-agent is an extension of veRL, designed for training LLM/VLM agents via RL. verl-agent is...
sotopia-lab/sotopia
Sotopia: an Open-ended Social Learning Environment (ICLR 2024 spotlight)
zhudotexe/redel
ReDel is a toolkit for researchers and developers to build, iterate on, and analyze recursive...
TIGER-AI-Lab/verl-tool
A version of verl to support diverse tool use
AMAP-ML/Tree-GRPO
[ICLR 2026] Tree Search for LLM Agent Reinforcement Learning