ethanhe42/epipolar-transformers
Epipolar Transformers (best paper award, CVPR 2020 workshop)
This project helps researchers and engineers reconstruct accurate 3D human and hand poses from ordinary 2D images captured by multiple calibrated cameras. You provide synchronized multi-view image sequences, and the system outputs 3D joint coordinates and skeletal models of the subject's pose. It is aimed at biomechanics researchers, animation studios, or anyone who needs to analyze fine-grained human movement.
427 stars. No commits in the last 6 months.
Use this if you need highly accurate 3D human or hand pose estimation from multi-view RGB video, especially for applications requiring detailed movement analysis.
Not ideal if you only need 2D pose tracking, have footage from a single uncalibrated camera, or are working with specialized sensor data such as depth cameras or motion-capture suits.
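For a sense of the core idea, here is a minimal, illustrative sketch, not code from the repository, of epipolar feature sampling and fusion between two views. It assumes PyTorch, a known 3x3 fundamental matrix F_mat mapping reference-view points to source-view epipolar lines, per-view feature maps, and a roughly horizontal epipolar line; all names and the exact fusion details are illustrative assumptions.

import torch
import torch.nn.functional as F_nn

def sample_epipolar_features(feat_ref, feat_src, p_ref, F_mat, num_samples=64):
    """Sketch: for a query pixel p_ref=(x, y) in the reference view, sample
    source-view features along its epipolar line l = F_mat @ [x, y, 1], then
    fuse them by dot-product attention against the reference feature.
    feat_ref, feat_src: (C, H, W) float tensors; F_mat: (3, 3) float tensor."""
    C, H, W = feat_src.shape
    x, y = p_ref
    p_h = torch.tensor([float(x), float(y), 1.0])
    a, b, c = F_mat @ p_h                 # epipolar line a*x + b*y + c = 0 in the source view
    # Parameterize the line by x and keep samples inside the image (assumes |b| is not tiny).
    xs = torch.linspace(0, W - 1, num_samples)
    ys = -(a * xs + c) / b
    valid = (ys >= 0) & (ys <= H - 1)
    xs, ys = xs[valid], ys[valid]
    # Normalize to [-1, 1] for grid_sample and gather source features along the line.
    grid = torch.stack([xs / (W - 1) * 2 - 1, ys / (H - 1) * 2 - 1], dim=-1)
    grid = grid.view(1, 1, -1, 2)
    samples = F_nn.grid_sample(feat_src[None], grid, align_corners=True)  # (1, C, 1, N)
    samples = samples[0, :, 0]                                            # (C, N)
    # Attention weights from similarity with the reference feature at p_ref.
    q = feat_ref[:, int(y), int(x)]                                       # (C,)
    attn = torch.softmax(samples.T @ q / C ** 0.5, dim=0)                 # (N,)
    return samples @ attn                                                 # fused (C,) feature

In the actual method this sampling and fusion is applied densely over feature maps inside the pose-estimation network, not per pixel as in this toy sketch.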
Stars: 427
Forks: 38
Language: Jupyter Notebook
License: MIT
Category: Computer Vision
Last pushed: May 02, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/ethanhe42/epipolar-transformers"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
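If you prefer Python over curl, a minimal sketch using only the standard library; the response schema is not documented here, so it simply pretty-prints whatever JSON the endpoint returns.

# Fetch the quality record for this repository and print the raw JSON.
import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "computer-vision/ethanhe42/epipolar-transformers")

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))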
Higher-rated alternatives
DeepLabCut/DeepLabCut
Official implementation of DeepLabCut: Markerless pose estimation of user-defined features with...
openpifpaf/openpifpaf
Official implementation of "OpenPifPaf: Composite Fields for Semantic Keypoint Detection and...
lambdaloop/anipose
🐜🐀🐒🚶 A toolkit for robust markerless 3D pose estimation
DIYer22/bpycv
Computer vision utils for Blender (generate instance annotation, depth and 6D pose with one line of code)
NeLy-EPFL/DeepFly3D
Motion capture (markerless 3D pose estimation) pipeline and helper GUI for tethered Drosophila.