jgraving/DeepPoseKit
A toolkit for pose estimation using deep learning.
This toolkit helps scientists and researchers track the movement of animals and objects in videos or images. You provide video footage or image sets, along with manually marked keypoints (like a bird's beak or a specific joint), and it automatically identifies those keypoints across new, unseen frames. It's ideal for anyone analyzing behavior or motion in biological or experimental settings.
405 stars. No commits in the last 6 months. Available on PyPI.
Use this if you need to precisely track specific body parts or features of individual animals or objects in images or videos, minimizing manual annotation effort.
Not ideal if you need to track multiple visually identical individuals, which cannot be told apart without prior localization or tracking software.
Stars
405
Forks
88
Language
Python
License
Apache-2.0
Category
Last pushed
Jul 07, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jgraving/DeepPoseKit"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
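The same endpoint can be called from Python instead of curl. This is a minimal sketch: the base URL comes from the command above, but the helper function and any response field names are assumptions, not documented API guarantees.

```python
# Sketch: fetch the quality metrics for a repository from the API above.
# quality_url() is a hypothetical helper; the response schema is not documented,
# so treat any field access on the returned JSON as an assumption.
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    """Build the metrics endpoint URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"

url = quality_url("jgraving", "DeepPoseKit")
# data = json.load(urlopen(url))  # counts against the 100 requests/day limit
```

Without a key this shares the 100-requests/day quota, so cache responses rather than fetching on every run.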
Related frameworks
talmolab/sleap
A deep learning framework for multi-animal pose tracking.
kennymckormick/pyskl
A toolbox for skeleton-based action recognition.
open-mmlab/mmaction2
OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark
kenshohara/3D-ResNets-PyTorch
3D ResNets for Action Recognition (CVPR 2018)
DenisTome/Lifting-from-the-Deep-release
Implementation of "Lifting from the Deep: Convolutional 3D Pose Estimation from a Single Image"