huangwl18/VoxPoser
VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models
This project helps robotics engineers and researchers program robotic arms to perform complex manipulation tasks by translating high-level natural language instructions into detailed action sequences. It takes a command like "pick up the red block and place it on the blue mat" and outputs a precise series of robot movements, bridging the gap between human language and robot actions. Robotics developers can use it to quickly prototype and deploy new manipulation capabilities.
786 stars. No commits in the last 6 months.
Use this if you need to enable a robotic arm to understand and execute zero-shot manipulation tasks described in natural language, without needing extensive training data for each new task.
Not ideal if your robot's environment lacks a robust real-time object perception system, as this demo relies on pre-segmented object masks rather than real-world detection.
Stars: 786
Forks: 106
Language: Python
License: MIT
Category:
Last pushed: Feb 20, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/huangwl18/VoxPoser"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
jingyaogong/minimind-v
🚀 Train a 26M-parameter visual multimodal model (VLM) from scratch in just 1 hour!
SkyworkAI/Skywork-R1V
Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI, specializing in...
roboflow/vision-ai-checkup
Take your LLM to the optometrist.
zai-org/GLM-TTS
GLM-TTS: Controllable & Emotion-Expressive Zero-shot TTS with Multi-Reward Reinforcement Learning
NExT-GPT/NExT-GPT
Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model