yudhisteer/Robotic-Grasping-Detection-with-PointNet

This project focuses on training robots to grasp everyday objects accurately. We gather a unique point cloud dataset using an iPhone's LiDAR sensor and process it with Polycam. We then develop a PointNet model from scratch to perform multi-class classification and part segmentation, guiding the robot on where to grasp each object.
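The core idea behind PointNet, which the repo builds from scratch, is a shared per-point MLP followed by a symmetric max-pool, so the global feature is invariant to the ordering of the input points. Below is a minimal NumPy sketch of that idea; the layer sizes and weights are illustrative placeholders, not the repo's actual architecture.

```python
import numpy as np

def pointnet_global_feature(points, w1, w2):
    """Toy PointNet encoder: a shared per-point MLP followed by a
    symmetric max-pool, making the output invariant to point order.
    Weights here are random placeholders, not trained parameters."""
    h = np.maximum(points @ w1, 0.0)   # shared MLP layer 1 (ReLU)
    h = np.maximum(h @ w2, 0.0)       # shared MLP layer 2 (ReLU)
    return h.max(axis=0)              # max-pool over the N points

rng = np.random.default_rng(0)
cloud = rng.normal(size=(128, 3))     # 128 points, xyz coordinates
w1 = rng.normal(size=(3, 16))         # illustrative layer sizes
w2 = rng.normal(size=(16, 32))

feat = pointnet_global_feature(cloud, w1, w2)
feat_shuffled = pointnet_global_feature(cloud[rng.permutation(128)], w1, w2)
print(np.allclose(feat, feat_shuffled))  # True: permutation-invariant
```

Because the only aggregation across points is a max, shuffling the input cloud leaves the global descriptor unchanged, which is exactly why point-order-agnostic data from a LiDAR scan can be fed in directly.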

Score: 30 / 100 (Emerging)

This project trains robots to identify optimal grasping points on everyday objects, much like humans instinctively know how to hold things. By analyzing 3D point cloud data captured from objects, it outputs specific locations a robot should grasp. This is for robotics engineers and researchers developing automated systems that need to interact with diverse physical objects.
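To output grasp locations rather than a single object label, a PointNet-style segmentation head broadcasts the global feature back to every point and classifies each point independently (for example, graspable vs. not graspable). The sketch below illustrates that pattern with hypothetical feature sizes and random weights; it is not the repo's trained head.

```python
import numpy as np

def segment_points(local_feats, w_seg):
    """Toy PointNet segmentation head: concatenate the pooled global
    feature onto every point's local feature, then classify each point
    independently. Weights are random placeholders for illustration."""
    global_feat = local_feats.max(axis=0)                    # (F,)
    n = local_feats.shape[0]
    per_point = np.concatenate(
        [local_feats, np.repeat(global_feat[None, :], n, axis=0)],
        axis=1)                                              # (N, 2F)
    logits = per_point @ w_seg                               # (N, classes)
    return logits.argmax(axis=1)                             # per-point label

rng = np.random.default_rng(1)
local_feats = rng.normal(size=(64, 8))   # per-point features from an encoder
w_seg = rng.normal(size=(16, 2))         # 2 classes: grasp / no-grasp
labels = segment_points(local_feats, w_seg)
```

The points labeled as the grasp class form the region the robot should target, which is how per-point segmentation turns into "specific locations a robot should grasp."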

No commits in the last 6 months.

Use this if you need to teach a robot to determine intelligent grasp points from an object's 3D shape, rather than hard-coding the gripper's physical movements.

Not ideal if your goal is to program the physical movements of a robotic hand or if you are working with 2D image data instead of 3D point clouds.

Robotics Grasping Automation 3D-Object-Recognition Point-Cloud-Analysis
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 16 / 25


Stars: 15
Forks: 6
Language: Python
License: None
Last pushed: Oct 30, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/yudhisteer/Robotic-Grasping-Detection-with-PointNet"

Open to everyone: 100 requests/day, no API key needed. Get a free key for 1,000 requests/day.