yudhisteer/Robotic-Grasping-Detection-with-PointNet
This project focuses on training robots to grasp everyday objects accurately. We gather a custom point cloud dataset using an iPhone's LiDAR sensor and process the scans with Polycam. We then implement a PointNet model from scratch to perform multi-class classification and part segmentation, guiding the robot on where to grasp objects.
This project trains robots to identify optimal grasping points on everyday objects, much as humans instinctively know how to hold things. By analyzing 3D point cloud data captured from real objects, it outputs the specific locations a robot should grasp. It is aimed at robotics engineers and researchers building automated systems that must interact with diverse physical objects.
No commits in the last 6 months.
Use this if you want a robot to infer where to grasp an object from its 3D shape, rather than having grasp positions manually programmed for each object.
Not ideal if your goal is to program the physical movements of a robotic hand, or if you are working with 2D image data instead of 3D point clouds.
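The PointNet idea described above can be sketched minimally in NumPy: a shared per-point MLP followed by a symmetric max-pool, which makes the output invariant to point ordering. The layer sizes, the random (untrained) weights, and the 4-class head below are illustrative assumptions, not the repo's actual architecture (which also includes part segmentation and T-Net alignment).

```python
# Minimal PointNet-style classifier sketch; weights are random and layer
# sizes (3 -> 64 -> 128, 4 classes) are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0)

# Shared per-point MLP weights and a small classification head
W1 = rng.normal(size=(3, 64))
W2 = rng.normal(size=(64, 128))
W3 = rng.normal(size=(128, 4))

def pointnet_forward(points):
    """points: (num_points, 3) array of XYZ coordinates -> class logits."""
    per_point = relu(relu(points @ W1) @ W2)  # same MLP applied to every point
    global_feat = per_point.max(axis=0)       # symmetric max-pool: order-invariant
    return global_feat @ W3                   # logits over the assumed 4 classes

cloud = rng.normal(size=(1024, 3))  # a synthetic stand-in for a LiDAR scan
logits = pointnet_forward(cloud)
```

Because the max-pool is symmetric, shuffling the input points leaves the logits unchanged, which is the core property that lets PointNet consume unordered point clouds.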
Stars: 15
Forks: 6
Language: Python
License: —
Category: —
Last pushed: Oct 30, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/yudhisteer/Robotic-Grasping-Detection-with-PointNet"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
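The same endpoint can be called from Python with the standard library. This is a sketch that assumes the endpoint returns JSON; the response fields are not documented here.

```python
# Hedged sketch of fetching this repo's quality record in Python
# (assumes a JSON response; fields undocumented on this page).
import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "computer-vision/yudhisteer/Robotic-Grasping-Detection-with-PointNet")

def fetch_quality(url=URL, timeout=10):
    """Fetch the quality record for this repository and parse it as JSON."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)

# data = fetch_quality()  # live call; counts against the 100 requests/day limit
```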
Higher-rated alternatives
changh95/visual-slam-roadmap
Roadmap to become a Visual-SLAM developer in 2026
coperception/coperception
An SDK for multi-agent collaborative perception.
w111liang222/lidar-slam-detection
LSD (LiDAR SLAM & Detection) is an open-source perception architecture for autonomous vehicles and robots.
ika-rwth-aachen/Cam2BEV
TensorFlow Implementation for Computing a Semantically Segmented Bird's Eye View (BEV) Image...
lvchuandong/Awesome-Multi-Camera-3D-Occupancy-Prediction
Awesome papers and code about Multi-Camera 3D Occupancy Prediction, such as TPVFormer,...