sled-group/3D-GRAND

[CVPR 2025] 3D-GRAND: Towards Better Grounding and Less Hallucination for 3D-LLMs

Score: 21 / 100 (Experimental)

This project offers a vast dataset and evaluation tools to improve how AI models understand and respond to instructions about physical 3D spaces. It takes 3D scene data and text descriptions, and produces AI models that can better interact with and describe real-world objects and environments. This is for researchers and developers building embodied AI agents and robots that need to accurately perceive and act within the physical world.

No commits in the last 6 months.

Use this if you are developing AI systems for robotics, augmented reality, or virtual assistants that need to understand and generate language grounded in complex 3D environments.

Not ideal if your AI application solely processes text or 2D images, or if you do not require dense, explicit connections between language and 3D objects.

Tags: robotics, embodied-ai, 3d-scene-understanding, natural-language-interaction, augmented-reality
Badges: No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 8 / 25
Community 5 / 25


Stars: 53
Forks: 2
Language: (not listed)
License: (not listed)
Last pushed: Jun 13, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/sled-group/3D-GRAND"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
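For programmatic access, the same endpoint can be called from Python with only the standard library. This is a minimal sketch: the URL path structure (`/{category}/{owner}/{repo}`) is taken from the curl example above, but the shape of the JSON response is an assumption — inspect the actual payload before relying on specific fields.

```python
import json
import urllib.request

# Base path taken from the curl example shown above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-report endpoint URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality report as parsed JSON.

    Anonymous access is rate-limited (100 requests/day per the page);
    the response schema is not documented here, so callers should
    inspect the returned dict rather than assume field names.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# Example: the repository described on this page.
url = quality_url("computer-vision", "sled-group", "3D-GRAND")
```

The URL builder is kept separate from the network call so it can be reused (or unit-tested) without making a request.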