qizekun/ShapeLLM

[ECCV 2024] ShapeLLM: Universal 3D Object Understanding for Embodied Interaction

39 / 100 (Emerging)

This project helps roboticists and augmented-reality developers build robots or AR systems that understand 3D objects in the real world through natural language. You give the system 3D scans (point clouds) of objects together with text questions, and it returns text answers describing or identifying those objects. It's designed for anyone building interactive systems that need to 'see' and 'talk about' their physical environment.

228 stars. No commits in the last 6 months.

Use this if you are developing an embodied AI system or an augmented reality application that needs to interpret 3D object data from sensors and respond to user queries in natural language.

Not ideal if your application primarily involves 2D image analysis or generating new 3D models, or if it requires highly precise 3D measurements rather than high-level object understanding.
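
As a rough illustration of the interaction pattern described above (point cloud plus question in, text answer out), here is a minimal Python sketch. The array shapes and the ask() stub are illustrative placeholders, not ShapeLLM's actual API.

import numpy as np

# A point cloud is typically an (N, 3) array of x/y/z coordinates, often with
# RGB columns appended to give (N, 6). A random array stands in for a real scan.
points = np.random.rand(8192, 6).astype(np.float32)

question = "What is this object, and where could a robot grasp it?"

def ask(point_cloud: np.ndarray, prompt: str) -> str:
    # Placeholder for a multimodal model call: point cloud + text in, text out.
    return f"(answer about an object described by {len(point_cloud)} points)"

print(ask(points, question))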

robotics augmented-reality 3D-object-recognition embodied-AI human-robot-interaction
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 13 / 25

Stars: 228
Forks: 17
Language: Python
License: Apache-2.0
Last pushed: Oct 08, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/qizekun/ShapeLLM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
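
A minimal Python sketch of the same request, assuming the endpoint returns JSON and that the free tier needs no authentication header:

import json
import urllib.request

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/qizekun/ShapeLLM"

# Fetch the quality record and pretty-print the returned fields.
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read().decode("utf-8"))
print(json.dumps(data, indent=2))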