andyzeng/apc-vision-toolbox
MIT-Princeton Vision Toolbox for the Amazon Picking Challenge 2016 - RGB-D ConvNet-based object segmentation and 6D object pose estimation.
This project helps roboticists and automation engineers working on pick-and-place systems to identify and precisely locate objects in cluttered warehouse environments. It takes raw RGB-D camera data from a RealSense sensor and outputs 2D object segmentation masks and the 6D pose (position and orientation) of recognized objects. The primary users are researchers and engineers developing robotic manipulation systems for logistics and manufacturing.
308 stars. No commits in the last 6 months.
Use this if you are building an automated robotic system that needs to accurately locate various objects in a bin or on a shelf, especially in challenging conditions with occlusions and sensor noise.
Not ideal if you are looking for a general-purpose object detection system for consumer applications or if your environment does not involve industrial robotic manipulation.
Stars: 308
Forks: 140
Language: C++
License: BSD-2-Clause
Category:
Last pushed: Oct 14, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/andyzeng/apc-vision-toolbox"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
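For scripted access, the endpoint above can be built programmatically. A minimal Python sketch, assuming the path pattern `category/owner/repo` inferred from the single example shown on this page (category names other than `computer-vision` are not documented here):

```python
# Build the catalog's quality-API URL for a given repository.
# Path pattern (category, then owner/repo) is inferred from the one
# example shown on this page; it is an assumption, not documented API.

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_api_url(category: str, owner: str, repo: str) -> str:
    """Return the API endpoint for one repository's quality data."""
    return f"{BASE}/{category}/{owner}/{repo}"

if __name__ == "__main__":
    # Reproduces the curl URL shown above.
    print(quality_api_url("computer-vision", "andyzeng", "apc-vision-toolbox"))
```

The URL can then be fetched with any HTTP client (as in the curl example above); the response schema is not specified on this page.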
Related tools
OSU-NLP-Group/UGround
[ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents
Ewenwan/MVision
Robot vision for mobile robots: VS-SLAM, ORB-SLAM2, deep-learning object detection (yolov3), action detection, opencv, PCL, machine learning, autonomous driving
leggedrobotics/wild_visual_navigation
Wild Visual Navigation: A system for fast traversability learning via pre-trained models and...
microsoft/event-vae-rl
Visuomotor policies from event-based cameras through representation learning and reinforcement...
RizwanMunawar/trajectory-forcast
Forecast object trajectory based on history of tracks. Provides a stable and computationally...