ai4ce/EgoPAT3D

[CVPR 2022] Egocentric Action Target Prediction in 3D

Score: 38 / 100 (Emerging)

This project offers a rich dataset and methods for predicting where a person's hand will go during an object manipulation task, viewed from a first-person perspective. It takes in egocentric RGB-D video and IMU data and outputs the likely 3D target location of the hand action. Roboticists and researchers developing assistive technologies or human-robot collaboration systems would find this valuable.

Use this if you are working on anticipating human intent for physical interaction tasks in 3D space, especially from a user's own viewpoint.

Not ideal if your focus is on general object recognition or activity classification rather than precise 3D target prediction of human manipulation.

Tags: human-robot interaction, assistive technology, egocentric vision, action anticipation, 3D environment understanding

No package · No dependents
Maintenance: 6 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 9 / 25


Stars: 32
Forks: 3
Language: Jupyter Notebook
License: MIT
Last pushed: Dec 02, 2025
Commits (30d): 0

Get this data via API:

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/ai4ce/EgoPAT3D"

Open to everyone: 100 requests/day with no key required; a free API key raises the limit to 1,000 requests/day.
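For scripted access, the same endpoint can be called from Python. A minimal sketch, assuming the URL pattern `/api/v1/quality/<category>/<owner>/<repo>` generalizes from the curl example above (the JSON response schema is not documented here, so fields are left uninspected):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository.

    The path pattern is inferred from the documented curl example;
    category names such as "computer-vision" are assumed, not verified.
    """
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload for a repository's quality data."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Reproduces the URL from the curl example; no key needed up to 100 requests/day.
    print(quality_url("computer-vision", "ai4ce", "EgoPAT3D"))
```

The `fetch_quality` helper is only a sketch: without a documented schema, callers should inspect the returned dictionary's keys before relying on them.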