zd11024/NaviLLM
[CVPR 2024] The code for paper 'Towards Learning a Generalist Model for Embodied Navigation'
This project helps roboticists and AI researchers build embodied agents that navigate and interact with complex 3D environments. Given instructions in various forms (like "go to the kitchen" or "find the red mug") together with visual observations of a 3D scene, it outputs navigation paths, answers to questions about the environment, or descriptions of objects. It's aimed at those building embodied AI agents for robotics or virtual environments.
229 stars. No commits in the last 6 months.
Use this if you need a single AI model to handle multiple embodied navigation tasks, like following instructions or answering questions about a 3D space, rather than building separate models for each task.
Not ideal if you're looking for a simple plug-and-play solution for a single, highly specialized navigation task, as it's designed for broad generalizability across many tasks.
Stars: 229
Forks: 15
Language: Python
License: MIT
Category:
Last pushed: Jun 18, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/zd11024/NaviLLM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
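If you prefer Python over curl, the same endpoint can be queried with the requests library. This is a minimal sketch: it uses the URL shown above, sends no key (staying within the 100 requests/day limit), and prints the raw JSON because the response schema isn't documented here.

import requests

# Quality-data endpoint shown above; no API key needed for up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/zd11024/NaviLLM"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()

# The response schema isn't shown on this page, so print the raw JSON as-is.
print(resp.json())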
Higher-rated alternatives
KimMeen/Time-LLM
[ICLR 2024] Official implementation of " 🦙 Time-LLM: Time Series Forecasting by Reprogramming...
om-ai-lab/VLM-R1
Solve Visual Understanding with Reinforced VLMs
bytedance/SALMONN
SALMONN family: A suite of advanced multi-modal LLMs
NVlabs/OmniVinci
OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language.
fixie-ai/ultravox
A fast multimodal LLM for real-time voice