oakink/OakInk
[CVPR 2022] OakInk: A Large-scale Knowledge Repository for Understanding Hand-Object Interaction
This project offers a comprehensive repository of interactions between human hands and everyday objects, providing 3D models of objects, their parts, and attributes, alongside videos and 3D data of hands grasping and manipulating them. It helps researchers and engineers build more realistic and intelligent computer vision and robotics applications. The data captures how humans naturally interact with everyday items, including dynamic grasping and handover motions. It is intended for tasks such as generating human-like grasping poses or teaching robots to interact with objects more effectively.
134 stars. No commits in the last 6 months.
Use this if you are developing AI models or robotic systems that need to understand, predict, or generate realistic human-object interactions, especially for grasping, manipulation, and handover scenarios.
Not ideal if your primary focus is on general object recognition or scene understanding without a specific need for detailed hand-object interaction data.
Stars: 134
Forks: 6
Language: Python
License: MIT
Category: Computer Vision
Last pushed: Dec 01, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/oakink/OakInk"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
DeepLabCut/DeepLabCut
Official implementation of DeepLabCut: Markerless pose estimation of user-defined features with...
openpifpaf/openpifpaf
Official implementation of "OpenPifPaf: Composite Fields for Semantic Keypoint Detection and...
lambdaloop/anipose
🐜🐀🐒🚶 A toolkit for robust markerless 3D pose estimation
DIYer22/bpycv
Computer vision utils for Blender (generate instance annotation, depth, and 6D pose in one line of code)
NeLy-EPFL/DeepFly3D
Motion capture (markerless 3D pose estimation) pipeline and helper GUI for tethered Drosophila.