andyzeng/visual-pushing-grasping

Train robotic agents to learn to plan pushing and grasping actions for manipulation with deep reinforcement learning.

Quality score: 51 / 100 — Established

This project helps roboticists teach industrial robot arms to efficiently pick up objects, even when they are tightly packed or hard to reach. Using visual input (camera images), the robot learns through trial and error to combine pushing and grasping actions, clearing clutter so it can successfully retrieve items. It is aimed at engineers, researchers, and technicians working on robotic manipulation in manufacturing or logistics.

1,087 stars. No commits in the last 6 months.

Use this if you need to train a robot arm for robust pick-and-place tasks involving cluttered environments or objects that require strategic pushing before grasping.

Not ideal if your robot arm only performs simple, unobstructed grasping or if you require a pre-built, non-learning-based solution.

robotics industrial-automation pick-and-place material-handling robotic-manipulation
Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 25 / 25


Stars: 1,087
Forks: 329
Language: Python
License: BSD-2-Clause
Last pushed: May 11, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/andyzeng/visual-pushing-grasping"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
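If you prefer to query the endpoint from code rather than curl, a minimal Python sketch is below. Only the URL shape shown in the curl command above is taken from this card; the `quality_url` and `fetch_quality` helper names are hypothetical, and the JSON response schema is not documented here, so inspect the actual payload before relying on any field names.

```python
import json
import urllib.request

# Base endpoint taken from the curl example on this card.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-data URL for a repository (hypothetical helper)."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and parse the quality data.

    Works on the free tier (100 requests/day, no key needed); the
    structure of the returned dict is an assumption, not documented here.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# Same repository as the curl example above:
url = quality_url("ml-frameworks", "andyzeng", "visual-pushing-grasping")
```

Calling `fetch_quality("ml-frameworks", "andyzeng", "visual-pushing-grasping")` would issue the same request as the curl command; add an API key header once you have one if you need the higher 1,000/day limit.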