ai4ce/EgoPAT3D
[CVPR 2022] Egocentric Prediction of Action Target in 3D
This project offers a rich dataset and methods for predicting where a person's hand will go during an object manipulation task, viewed from a first-person (egocentric) perspective. It takes egocentric RGB-D video and IMU data as input and outputs the likely 3D target location of the hand action. Roboticists and researchers developing assistive technologies or human-robot collaboration systems will find this valuable.
Use this if you are working on anticipating human intent for physical interaction tasks in 3D space, especially from a user's own viewpoint.
Not ideal if your focus is on general object recognition or activity classification rather than precise 3D target prediction of human manipulation.
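To make the input/output contract concrete, here is a minimal, hypothetical Python sketch of the task shape: an egocentric RGB-D clip plus synchronized IMU readings in, a 3D target point out. The function name, array shapes, and placeholder logic are assumptions for illustration, not the repository's actual API.

import numpy as np

def predict_action_target(rgbd_clip: np.ndarray, imu: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a learned predictor.

    rgbd_clip: (T, H, W, 4) frames, channels = RGB + depth (meters).
    imu:       (T, 6) readings, accelerometer + gyroscope.
    Returns an (x, y, z) action target in the camera frame.
    """
    _, H, W, _ = rgbd_clip.shape
    # Placeholder heuristic: take the depth at the center of the last
    # frame as z and assume the target lies on the optical axis. A real
    # model would fuse the video and IMU streams instead.
    z = float(rgbd_clip[-1, H // 2, W // 2, 3])
    return np.array([0.0, 0.0, z])

clip = np.random.rand(8, 480, 640, 4).astype(np.float32)  # dummy 8-frame clip
imu = np.random.rand(8, 6).astype(np.float32)             # dummy IMU track
print(predict_action_target(clip, imu))                   # e.g. [0.  0.  0.73]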
Stars
32
Forks
3
Language
Jupyter Notebook
License
MIT
Category
Computer Vision
Last pushed
Dec 02, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/ai4ce/EgoPAT3D"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
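As a quick sketch, the same endpoint can be fetched from Python. The JSON response schema is not documented on this page, so printing the raw body is an assumption.

import requests  # third-party: pip install requests

# Endpoint copied from the curl example above. Anonymous access is
# limited to 100 requests/day; the free-key mechanism for 1,000/day
# is not documented here, so no auth header is shown.
URL = "https://pt-edge.onrender.com/api/v1/quality/computer-vision/ai4ce/EgoPAT3D"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surfaces 4xx/5xx errors, e.g. rate limiting
print(resp.json())       # assumes a JSON body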
Higher-rated alternatives
DeepLabCut/DeepLabCut
Official implementation of DeepLabCut: Markerless pose estimation of user-defined features with...
openpifpaf/openpifpaf
Official implementation of "OpenPifPaf: Composite Fields for Semantic Keypoint Detection and...
lambdaloop/anipose
🐜🐀🐒🚶 A toolkit for robust markerless 3D pose estimation
DIYer22/bpycv
Computer vision utils for Blender (generate instance annotation, depth and 6D pose with one line of code)
NeLy-EPFL/DeepFly3D
Motion capture (markerless 3D pose estimation) pipeline and helper GUI for tethered Drosophila.