Open3DA/LL3DA
[CVPR 2024] "LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning"; an interactive Large Language 3D Assistant.
This project offers an interactive Large Language 3D Assistant that can understand and respond to both visual cues and text commands within complex 3D environments. It takes in 3D data, such as point clouds, along with natural language questions or instructions, and provides detailed descriptions, answers, or plans for action. This tool is ideal for researchers and developers working on advanced AI systems that need to comprehend and interact with the physical world in three dimensions.
311 stars. No commits in the last 6 months.
Use this if you need to build AI models that can deeply understand 3D scenes from point cloud data and respond to human-like instructions or queries about those scenes.
Not ideal if your primary need is for 2D image analysis or if you don't have access to 3D point cloud data for your environments.
Stars: 311
Forks: 14
Language: Python
License: MIT
Category:
Last pushed: Jul 17, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Open3DA/LL3DA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
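As a minimal sketch of scripting against this endpoint, the snippet below builds the request URL from an owner/repo pair; the `quality_url` helper is hypothetical (not part of the API), and the response schema is not documented here, so the fetch itself is only indicated in a comment.

```python
# Hypothetical helper for the pt-edge quality endpoint shown above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Return the quality-data URL for a given GitHub owner/repo."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("Open3DA", "LL3DA")
print(url)
# A real call could then be: requests.get(url).json()
# (response fields are not documented on this page).
```

The unauthenticated tier allows 100 requests/day, so any polling script should cache responses rather than re-fetch on every run.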
Higher-rated alternatives
jingyaogong/minimind-v
🚀 Train a 26M-parameter visual multimodal VLM ("large model") from scratch in just 1 hour! 🌏
SkyworkAI/Skywork-R1V
Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI, specializing in...
roboflow/vision-ai-checkup
Take your LLM to the optometrist.
zai-org/GLM-TTS
GLM-TTS: Controllable & Emotion-Expressive Zero-shot TTS with Multi-Reward Reinforcement Learning
NExT-GPT/NExT-GPT
Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model