andyzeng/arc-robot-vision
MIT-Princeton Vision Toolbox for Robotic Pick-and-Place at the Amazon Robotics Challenge 2017 - Robotic Grasping and One-shot Recognition of Novel Objects with Deep Learning.
This project helps roboticists and automation engineers build systems that can identify and pick up unfamiliar items in cluttered environments. It takes RGB-D images from a robot's camera, outputs grasp proposals for the robot's gripper, and then recognizes the grasped objects, including ones never seen during training. Warehouse automation specialists, robotics researchers, and anyone designing intelligent pick-and-place systems would find this valuable.
321 stars. No commits in the last 6 months.
Use this if you need a robot to reliably pick up and identify various objects, including those it hasn't seen before, in a busy and unstructured setting like a warehouse.
Not ideal if your robot only handles a limited set of pre-programmed objects or operates in highly structured, predictable environments.
Stars: 321
Forks: 96
Language: Lua
License: Apache-2.0
Category:
Last pushed: Oct 14, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/andyzeng/arc-robot-vision"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
dougsm/ggcnn
Generative Grasping CNN from "Closing the Loop for Robotic Grasping: A Real-time, Generative...
graspnet/graspnet-baseline
Baseline model for "GraspNet-1Billion: A Large-Scale Benchmark for General Object Grasping" (CVPR 2020)
NVIDIA-ISAAC-ROS/isaac_ros_dnn_stereo_depth
NVIDIA-accelerated, deep learned stereo disparity estimation
PickNikRobotics/deep_grasp_demo
Deep learning for grasp detection within MoveIt.
wkentaro/reorientbot
ReorientBot: Learning Object Reorientation for Specific-Posed Placement, ICRA 2022