bagh2178/SG-Nav
[NeurIPS 2024] SG-Nav: Online 3D Scene Graph Prompting for LLM-based Zero-shot Object Navigation
This project helps roboticists and AI researchers develop autonomous agents that can navigate and find specific objects in complex 3D virtual environments without prior training. You provide the 3D scene data and a target object, and the system guides the agent to locate it, even for novel objects or scenes it hasn't encountered before. It's designed for those building intelligent robots or simulation systems.
323 stars. No commits in the last 6 months.
Use this if you need an autonomous agent to perform zero-shot object navigation in diverse 3D environments without extensive pre-training.
Not ideal if your application requires navigation in real-world physical environments or if you are not working with 3D simulation datasets.
Stars: 323
Forks: 24
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Sep 16, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/bagh2178/SG-Nav"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
Higher-rated alternatives
jingyaogong/minimind-v
🚀 Train a 26M-parameter visual multimodal VLM ("large model") from scratch in just 1 hour! 🌏
roboflow/vision-ai-checkup
Take your LLM to the optometrist.
SkyworkAI/Skywork-R1V
Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI, specializing in...
zai-org/GLM-TTS
GLM-TTS: Controllable & Emotion-Expressive Zero-shot TTS with Multi-Reward Reinforcement Learning
NExT-GPT/NExT-GPT
Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model