CMU-Perceptual-Computing-Lab/openpose
OpenPose: Real-time multi-person keypoint detection library for body, face, hands, and foot estimation
This project helps you automatically detect and track human body, face, and hand movements in real-time from videos, webcams, or images. It takes visual input and outputs detailed skeletal or keypoint data (like elbow, nose, or fingertip positions) for multiple people. Researchers in human-computer interaction, sports scientists, or animators can use this to analyze or reproduce human motion.
33,855 stars. No commits in the last 6 months.
Use this if you need to precisely track full-body human motion, including subtle hand and facial expressions, from standard video sources in real-time.
Not ideal if you only need to detect simple presence or gross movement without requiring detailed skeletal information for each body part.
Stars: 33,855
Forks: 8,054
Language: C++
License: —
Category: computer-vision
Last pushed: Aug 03, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/CMU-Perceptual-Computing-Lab/openpose"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
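The endpoint above can be called from any HTTP client. A minimal Python sketch of working with it offline; only the URL pattern (`/api/v1/quality/<category>/<owner>/<repo>`) comes from the curl example, and the JSON field names in `summarize` are assumptions about the response shape:

```python
# Hypothetical helpers for the pt-edge quality API shown above.
# Only the URL pattern is taken from the curl example; the JSON
# field names (stars, forks, last_pushed) are assumptions.
import json
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a repository's quality data."""
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

def summarize(payload: str) -> str:
    """Render an assumed JSON payload as a one-line summary."""
    data = json.loads(payload)
    return (f"{data['stars']} stars, {data['forks']} forks, "
            f"last pushed {data['last_pushed']}")

# Example with the repository on this page:
url = build_quality_url("computer-vision",
                        "CMU-Perceptual-Computing-Lab", "openpose")
```

To actually fetch the data, pass `url` to any HTTP client (e.g. `urllib.request.urlopen` or `requests.get`) and feed the response body to `summarize`.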
Related tools
DeepLabCut/DeepLabCut
Official implementation of DeepLabCut: Markerless pose estimation of user-defined features with...
openpifpaf/openpifpaf
Official implementation of "OpenPifPaf: Composite Fields for Semantic Keypoint Detection and...
lambdaloop/anipose
🐜🐀🐒🚶 A toolkit for robust markerless 3D pose estimation
DIYer22/bpycv
Computer vision utils for Blender (generate instance annotation, depth, and 6D pose in one line of code)
NeLy-EPFL/DeepFly3D
Motion capture (markerless 3D pose estimation) pipeline and helper GUI for tethered Drosophila.