Fsoft-AIC/Language-Conditioned-Affordance-Pose-Detection-in-3D-Point-Clouds

[ICRA 2024] Language-Conditioned Affordance-Pose Detection in 3D Point Clouds

Score: 37 / 100 (Emerging)

This project helps roboticists and automation engineers automatically understand how humans interact with objects. By inputting a 3D scan of an object and a text description of a potential interaction (e.g., "grasp the handle"), the system identifies the relevant areas on the object and generates precise 3D poses for that interaction. It's designed for those building robots that need to intelligently interact with unstructured environments.
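To make the input/output contract described above concrete, here is a minimal Python sketch of the data shapes involved: a point cloud in, per-point affordance scores and candidate poses out. All names and shapes here are illustrative assumptions, not taken from the repository's actual code.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class AffordancePoseResult:
    """Hypothetical output container: one affordance score per input
    point, plus candidate interaction poses as homogeneous transforms."""
    affordance: np.ndarray  # shape (N,), score in [0, 1] per point
    poses: np.ndarray       # shape (K, 4, 4), one 4x4 transform per pose

def validate_result(points: np.ndarray, result: AffordancePoseResult) -> bool:
    """Check the shapes line up: points are (N, 3), there is one
    affordance score per point, and each pose is a 4x4 matrix."""
    return (
        points.ndim == 2 and points.shape[1] == 3
        and result.affordance.shape == (points.shape[0],)
        and result.poses.ndim == 3 and result.poses.shape[1:] == (4, 4)
    )

# Dummy data: a 2048-point cloud and 5 identity poses (assumed sizes).
pts = np.zeros((2048, 3))
res = AffordancePoseResult(np.zeros(2048), np.tile(np.eye(4), (5, 1, 1)))
print(validate_result(pts, res))  # -> True
```

A real pipeline would condition the result on a text prompt such as "grasp the handle"; this sketch only pins down the tensor shapes a consumer of such a system would need to handle.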

No commits in the last 6 months.

Use this if you need to train a robotic system to autonomously identify interaction points and appropriate manipulation poses on unknown 3D objects based on high-level language commands.

Not ideal if you're looking for a general-purpose object detection tool or if your robots don't rely on 3D point cloud data for interaction planning.

Tags: robotics, robotic-manipulation, 3d-scene-understanding, human-robot-interaction, automation-engineering
Status: Stale (6 months) · No package published · No dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 13 / 25


Stars: 51
Forks: 7
Language: Python
License: MIT
Last pushed: Jan 10, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/Fsoft-AIC/Language-Conditioned-Affordance-Pose-Detection-in-3D-Point-Clouds"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
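The same endpoint can be consumed programmatically. Below is a sketch of parsing the response in Python; the JSON field names (`score`, `maintenance`, and so on) are assumptions mirroring the scorecard above, since the actual response schema is not documented here.

```python
import json

# Endpoint from the curl example above; fetch it with curl or urllib,
# then parse the body. Field names below are assumed, not documented.
API_URL = (
    "https://pt-edge.onrender.com/api/v1/quality/diffusion/"
    "Fsoft-AIC/Language-Conditioned-Affordance-Pose-Detection-in-3D-Point-Clouds"
)

def parse_quality(payload: str) -> dict:
    """Extract the overall score and per-category breakdown from a
    JSON response body (assumed shape)."""
    data = json.loads(payload)
    return {
        "score": data.get("score"),
        "breakdown": {
            k: data.get(k)
            for k in ("maintenance", "adoption", "maturity", "community")
        },
    }

# Hand-written payload mirroring the card above (assumed shape --
# the live response may differ):
sample = (
    '{"score": 37, "maintenance": 0, "adoption": 8,'
    ' "maturity": 16, "community": 13}'
)
print(parse_quality(sample))
```

Using `.get()` rather than indexing keeps the parser tolerant of missing fields if the real schema differs from this guess.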