MasterHow/EventPointPose

[3DV 2022] PyTorch implementation for 3D event-based human pose estimation

Score: 35 / 100 (Emerging)

This project helps autonomous driving systems understand human movements by estimating 3D human poses. It takes event streams from specialized event cameras, represented as 3D event point clouds, and outputs precise 3D joint locations of people. It is aimed at engineers and researchers developing advanced perception systems for self-driving vehicles.

No commits in the last 6 months.

Use this if you are working on autonomous driving or robotics and need to accurately track human body poses using event camera data, especially in challenging visual conditions.

Not ideal if your primary input is standard RGB video or images, or if your application does not involve event-based vision.

autonomous-driving robotics-perception human-pose-estimation event-camera-vision 3d-computer-vision
Stale (6m) · No package · No dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 11 / 25


Stars: 65
Forks: 7
Language: Python
License: MIT
Last pushed: Dec 04, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/MasterHow/EventPointPose"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
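The same endpoint can be called from Python instead of curl. A minimal sketch, assuming the API follows the `/{category}/{owner}/{repo}` path shown above and returns JSON (the response schema is not documented here, so `fetch_quality` simply decodes whatever comes back):

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a given repo."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON report (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# URL for this project's report:
print(quality_url("computer-vision", "MasterHow", "EventPointPose"))
# → https://pt-edge.onrender.com/api/v1/quality/computer-vision/MasterHow/EventPointPose
```

With an API key, you would typically pass it in a header or query parameter; the exact mechanism is not specified on this page.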