H-Freax/ThinkGrasp
[CoRL2024] ThinkGrasp: A Vision-Language System for Strategic Part Grasping in Clutter. https://arxiv.org/abs/2407.11298
This system helps robots strategically pick up specific objects from a cluttered environment. It takes in visual information from a camera (RGB and depth images) along with a text description of the object to grasp, and outputs precise instructions for the robot arm to pick up the desired item. This is primarily useful for robotics engineers and researchers developing automated manipulation systems.
113 stars. No commits in the last 6 months.
Use this if you need a robotic system to intelligently identify and grasp specific parts within a disorganized pile based on visual input and a textual prompt.
Not ideal if your application involves simple, pre-programmed pick-and-place tasks without the need for visual recognition or complex decision-making in clutter.
Stars: 113
Forks: 8
Language: Python
License: MIT
Category:
Last pushed: Jul 28, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/H-Freax/ThinkGrasp"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
jingyaogong/minimind-v
🚀 Train a 26M-parameter vision multimodal VLM from scratch in just 1 hour!
roboflow/vision-ai-checkup
Take your LLM to the optometrist.
SkyworkAI/Skywork-R1V
Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI, specializing in...
zai-org/GLM-TTS
GLM-TTS: Controllable & Emotion-Expressive Zero-shot TTS with Multi-Reward Reinforcement Learning
NExT-GPT/NExT-GPT
Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model