RobotLocomotion/pytorch-dense-correspondence

Code for "Dense Object Nets: Learning Dense Visual Object Descriptors by and for Robotic Manipulation"

Score: 51 / 100 (Established)

This helps engineers develop and train robot perception systems to identify and locate specific points on objects, even if the objects are new to the robot or have changed shape. It takes in visual data of objects and produces a rich, detailed understanding of their surfaces, allowing robots to perform precise manipulation tasks. Roboticists and automation engineers building advanced robotic systems would use this to improve grasping and handling of diverse items.

577 stars. No commits in the last 6 months.

Use this if you need to train robots to accurately grasp or interact with specific parts of unfamiliar or deformable objects.

Not ideal if your robots only handle rigid objects in highly structured, unchanging environments, where simpler object recognition methods suffice.

robotics robotic-manipulation computer-vision automation-engineering object-recognition
Stale (6 months) · No package · No dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 25 / 25


Stars: 577
Forks: 134
Language: Python
License:
Last pushed: May 09, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/RobotLocomotion/pytorch-dense-correspondence"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
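The curl command above can also be reproduced in Python. This is a minimal sketch: the URL pattern is taken from the example above, but the response field names (e.g. a `score` key) are assumptions and should be checked against the actual JSON the endpoint returns.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-record URL for a repository, following the
    /{category}/{owner}/{repo} pattern shown in the curl example."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "RobotLocomotion", "pytorch-dense-correspondence")

# Uncomment to fetch live data (no API key needed, up to 100 requests/day):
# data = json.load(urllib.request.urlopen(url))
# print(data.get("score"))  # "score" is an assumed field name
```

Keeping the URL construction in a small helper makes it easy to query other repositories in the same category without repeating the base path.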