kennymckormick/pyskl
A toolbox for skeleton-based action recognition.
This project lets computer vision practitioners analyze human movement in video: it takes skeleton data (digital representations of body joints) as input and classifies the action being performed, such as 'walking' or 'waving'. It suits researchers, analysts, and developers building systems that need to understand human behavior from video footage.
Use this if you need to automatically recognize human actions or gestures from video, especially when you have or can extract skeleton data.
Not ideal if your primary goal is general object detection, or if you cannot obtain or extract human skeleton data from your video sources.
Stars
1,213
Forks
221
Language
Python
License
Apache-2.0
Category
Last pushed
Feb 19, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/kennymckormick/pyskl"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
talmolab/sleap
A deep learning framework for multi-animal pose tracking.
open-mmlab/mmaction2
OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark
jgraving/DeepPoseKit
a toolkit for pose estimation using deep learning
kenshohara/3D-ResNets-PyTorch
3D ResNets for Action Recognition (CVPR 2018)
DenisTome/Lifting-from-the-Deep-release
Implementation of "Lifting from the Deep: Convolutional 3D Pose Estimation from a Single Image"