MasterHow/EventPointPose
[3DV 2022] PyTorch implementation for 3D Event-based Human Pose Estimation
This project helps autonomous driving systems understand human movements by estimating 3D human poses. It consumes 3D event point clouds derived from event-camera data and outputs precise 3D joint locations for the people in the scene. It is aimed at engineers and researchers developing perception systems for self-driving vehicles.
No commits in the last 6 months.
Use this if you are working on autonomous driving or robotics and need to accurately track human body poses using event camera data, especially in challenging visual conditions.
Not ideal if your primary input is standard RGB video or images, or if your application does not involve event-based vision.
Stars
65
Forks
7
Language
Python
License
MIT
Category
Computer Vision
Last pushed
Dec 04, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/MasterHow/EventPointPose"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
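For programmatic access beyond curl, the endpoint URL can be built and reused from Python. The path pattern (a category segment followed by owner/repo) is inferred from the single example above; the `category` parameter generalizing to other values is an assumption. A minimal sketch:

```python
def quality_api_url(owner, repo, category="computer-vision",
                    base="https://pt-edge.onrender.com/api/v1/quality"):
    """Build the quality-endpoint URL for an owner/repo pair.

    The category segment is assumed from the example URL shown above;
    other categories may or may not exist on this API.
    """
    return f"{base}/{category}/{owner}/{repo}"

url = quality_api_url("MasterHow", "EventPointPose")
print(url)
# https://pt-edge.onrender.com/api/v1/quality/computer-vision/MasterHow/EventPointPose
```

The URL can then be fetched with any HTTP client (for example `urllib.request.urlopen(url)` from the standard library); keep within the 100 requests/day limit if no key is supplied.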
Higher-rated alternatives
DeepLabCut/DeepLabCut
Official implementation of DeepLabCut: Markerless pose estimation of user-defined features with...
openpifpaf/openpifpaf
Official implementation of "OpenPifPaf: Composite Fields for Semantic Keypoint Detection and...
lambdaloop/anipose
🐜🐀🐒🚶 A toolkit for robust markerless 3D pose estimation
DIYer22/bpycv
Computer vision utils for Blender (generate instance annotation, depth and 6D pose by one line code)
NeLy-EPFL/DeepFly3D
Motion capture (markerless 3D pose estimation) pipeline and helper GUI for tethered Drosophila.