huangwl18/VoxPoser

VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models

Score: 47 / 100 (Emerging)

This project helps robotics engineers and researchers program robotic arms to perform complex manipulation tasks by translating high-level natural language instructions into detailed action sequences. It takes a natural-language command, such as "pick up the red block and place it on the blue mat," and outputs a precise sequence of robot movements, bridging the gap between human language and robot actions. Robotics developers can use this to quickly prototype and deploy new manipulation capabilities.

786 stars. No commits in the last 6 months.

Use this if you need to enable a robotic arm to understand and execute zero-shot manipulation tasks described in natural language, without needing extensive training data for each new task.

Not ideal if your robot's environment lacks a robust real-time object perception system, as this demo relies on pre-segmented object masks rather than real-world detection.

robotic-manipulation robot-programming automation human-robot-interaction AI-robotics
Stale (6m) · No package · No dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 21 / 25


Stars: 786
Forks: 106
Language: Python
License: MIT
Last pushed: Feb 20, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/huangwl18/VoxPoser"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
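For scripted access, the curl example above can be wrapped in a small Python helper. This is a minimal sketch: only the endpoint URL comes from the listing; the function name and parameters are illustrative, and the commented-out fetch requires network access.

```python
# Build the per-repo quality endpoint URL shown in the curl example above.
# The function name and parameters are hypothetical conveniences.

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def build_quality_url(owner: str, repo: str) -> str:
    """Return the quality-score API URL for a given GitHub owner/repo."""
    return f"{API_BASE}/{owner}/{repo}"

# Fetching the JSON (illustrative; needs network access):
# import json, urllib.request
# with urllib.request.urlopen(build_quality_url("huangwl18", "VoxPoser")) as resp:
#     data = json.load(resp)

print(build_quality_url("huangwl18", "VoxPoser"))
```

Anonymous access is rate-limited to 100 requests/day, so batch lookups should use a free API key for the higher 1,000/day limit.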