Fsoft-AIC/Grasp-Anything
Dataset and Code for ICRA 2024 paper "Grasp-Anything: Large-scale Grasp Dataset from Foundation Models."
This project helps roboticists and automation engineers develop and test robot grasping capabilities. It provides a large-scale dataset of grasp configurations for diverse objects, generated with foundation models. Feeding this data into learning pipelines lets you train robots to pick up a wide range of items.
219 stars. No commits in the last 6 months.
Use this if you are developing robotic systems that need to accurately grasp and manipulate a wide range of objects in environments like manufacturing, logistics, or service robotics.
Not ideal if you need an out-of-the-box grasping solution; the project provides a dataset and training framework for building your own models, not a deployable system.
Stars: 219
Forks: 21
Language: Python
License: MIT
Category:
Last pushed: Jun 26, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Fsoft-AIC/Grasp-Anything"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
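If you prefer Python over curl, here is a minimal sketch of the same request using the requests library. It assumes the endpoint returns a JSON body; the response schema is not documented here.

import requests

# Same endpoint as the curl command above; no API key needed at the free tier.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/Fsoft-AIC/Grasp-Anything"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on rate limits or server errors
data = resp.json()       # assumption: the API returns JSON
print(data)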
Higher-rated alternatives
KimMeen/Time-LLM
[ICLR 2024] Official implementation of "🦙 Time-LLM: Time Series Forecasting by Reprogramming...
om-ai-lab/VLM-R1
Solve Visual Understanding with Reinforced VLMs
bytedance/SALMONN
SALMONN family: A suite of advanced multi-modal LLMs
NVlabs/OmniVinci
OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language.
fixie-ai/ultravox
A fast multimodal LLM for real-time voice