Open3DA/LL3DA

[CVPR 2024] "LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning"; an interactive Large Language 3D Assistant.

Score: 36 / 100 (Emerging)

This project offers an interactive Large Language 3D Assistant that can understand and respond to both visual cues and text commands within complex 3D environments. It takes in 3D data, such as point clouds, along with natural language questions or instructions, and provides detailed descriptions, answers, or plans for action. This tool is ideal for researchers and developers working on advanced AI systems that need to comprehend and interact with the physical world in three dimensions.

311 stars. No commits in the last 6 months.

Use this if you need to build AI models that can deeply understand 3D scenes from point cloud data and respond to human-like instructions or queries about those scenes.

Not ideal if your primary need is for 2D image analysis or if you don't have access to 3D point cloud data for your environments.

3D-scene-understanding robotics-perception AI-assistants spatial-reasoning human-machine-interaction
Badges: Stale (6 months), No Package, No Dependents

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 10 / 25


Stars: 311
Forks: 14
Language: Python
License: MIT
Last pushed: Jul 17, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Open3DA/LL3DA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
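For programmatic use, the curl call above can be wrapped in a few lines of Python. This is a minimal sketch: only the endpoint shape (`/api/v1/quality/llm-tools/<owner>/<repo>`) is taken from this page, and the assumption that the API returns a JSON body is unverified, as the response schema is not documented here.

```python
import json
import urllib.request

# Endpoint base as shown in the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-report URL for a given GitHub owner/repo."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality report. Assumes the API responds with JSON."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same request as the curl example, for this repo.
    print(quality_url("Open3DA", "LL3DA"))
```

Within the free tier this stays under the 100 requests/day limit for a single lookup; batch users would want the free API key mentioned above.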