paolotron/D3G
Visual Relationship Reasoning for Grasp Planning
This project helps roboticists and automation engineers develop robotic systems that can intelligently grasp objects. By analyzing images of scenes containing objects, it identifies not just individual objects but also their spatial relationships to each other. This information then guides the robot in determining the best way to pick up specific items, making robotic manipulation more robust and adaptive.
No commits in the last 6 months.
Use this if you are a robotics researcher or engineer working on advanced manipulation tasks where robots need to understand object interactions to grasp items effectively in complex environments.
Not ideal if you need simple object detection or a basic grasping solution that doesn't require understanding relationships between objects.
Stars: 18
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: May 22, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/paolotron/D3G"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
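The same data can be fetched programmatically. The sketch below builds the endpoint URL from the curl command above; the `quality_url` helper and the category/owner/repo path structure are assumptions inferred from that single URL, and the response's JSON schema is not documented here.

```python
# Minimal sketch for querying the quality API.
# Assumption: the path follows /api/v1/quality/<category>/<owner>/<repo>,
# inferred from the single example URL above.
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

url = quality_url("computer-vision", "paolotron", "D3G")
print(url)

# Uncomment to perform the request (no API key needed, 100 requests/day);
# the response field names are not documented, so inspect the JSON first:
# with urlopen(url) as resp:
#     data = json.load(resp)
#     print(json.dumps(data, indent=2))
```

No authentication header is required at the free tier, so a plain GET is enough; pretty-printing the response is a reasonable first step before relying on any particular field.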
Higher-rated alternatives
andyzeng/arc-robot-vision
MIT-Princeton Vision Toolbox for Robotic Pick-and-Place at the Amazon Robotics Challenge 2017 -...
dougsm/ggcnn
Generative Grasping CNN from "Closing the Loop for Robotic Grasping: A Real-time, Generative...
graspnet/graspnet-baseline
Baseline model for "GraspNet-1Billion: A Large-Scale Benchmark for General Object Grasping" (CVPR 2020)
NVIDIA-ISAAC-ROS/isaac_ros_dnn_stereo_depth
NVIDIA-accelerated, deep learned stereo disparity estimation
PickNikRobotics/deep_grasp_demo
Deep learning for grasp detection within MoveIt.