RobotLocomotion/pytorch-dense-correspondence
Code for "Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation"
This helps engineers develop and train robot perception systems that identify and locate specific points on objects, even when the objects are new to the robot or have changed shape. It takes in visual data of objects and produces dense visual descriptors of their surfaces, allowing robots to perform precise manipulation tasks. Roboticists and automation engineers building advanced robotic systems can use it to improve grasping and handling of diverse items.
577 stars. No commits in the last 6 months.
Use this if you need to train robots to accurately grasp or interact with specific parts of unfamiliar or deformable objects.
Not ideal if your robots only handle rigid objects in highly structured, unchanging environments, where simpler object recognition methods suffice.
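The core idea behind dense descriptors can be sketched briefly: the network maps every pixel of an image to a D-dimensional descriptor, and a point on one object is located in another image by nearest-neighbor search in descriptor space. A minimal illustration with NumPy, where the shapes and the function name are illustrative assumptions, not the repository's actual API:

```python
# Illustrative sketch only: dense-descriptor correspondence by
# nearest-neighbor search. Not the repository's real API.
import numpy as np

def find_correspondence(desc_a, uv_a, desc_b):
    """Given descriptor images desc_a, desc_b of shape (H, W, D),
    return the pixel (row, col) in image B whose descriptor is
    closest in L2 distance to the descriptor at pixel uv_a in A."""
    target = desc_a[uv_a]                              # shape (D,)
    dists = np.linalg.norm(desc_b - target, axis=-1)   # shape (H, W)
    row, col = np.unravel_index(np.argmin(dists), dists.shape)
    return (int(row), int(col))

# Toy example: 4x4 "images" with 3-D descriptors of the same scene
rng = np.random.default_rng(0)
desc_b = rng.normal(size=(4, 4, 3))
desc_a = desc_b.copy()
print(find_correspondence(desc_a, (2, 1), desc_b))  # -> (2, 1)
```

In practice the descriptors come from a trained network and the match is only approximate, but the lookup step is this simple nearest-neighbor search.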
Stars: 577
Forks: 134
Language: Python
License: —
Category:
Last pushed: May 09, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/RobotLocomotion/pytorch-dense-correspondence"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
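The curl command above can also be called from Python with the standard library alone. The URL pattern is taken from the example above; the structure of the JSON payload is an assumption, so the sketch below only builds the URL and decodes whatever JSON comes back:

```python
# Hedged sketch: query the quality API for any GitHub repo.
# Endpoint path is from the curl example; the response's JSON
# field names are not documented here, so we return the raw dict.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (100 requests/day keyless)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# Example: the URL for this repository
print(quality_url("RobotLocomotion", "pytorch-dense-correspondence"))
```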
Related frameworks
openspyrit/spyrit
A Python toolbox for deep image reconstruction, with emphasis on single-pixel imaging.
Fyusion/LLFF
Code release for Local Light Field Fusion at SIGGRAPH 2019
pmh47/dirt
DIRT: a fast differentiable renderer for TensorFlow
marrlab/SHAPR_torch
SHAPR: Code for "Capturing Shape Information with Multi-Scale Topological Loss Terms for 3D...
natowi/3D-Reconstruction-with-Deep-Learning-Methods
List of projects for 3d reconstruction