jgraving/DeepPoseKit

a toolkit for pose estimation using deep learning

Quality score: 58 / 100 (Established)

This toolkit helps scientists and researchers track the movement of animals and objects in videos or images. You provide video footage or image sets, along with manually marked keypoints (like a bird's beak or a specific joint), and it automatically identifies those keypoints across new, unseen frames. It's ideal for anyone analyzing behavior or motion in biological or experimental settings.
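The workflow described above (annotate keypoints, train, then predict on unseen frames) can be sketched as follows. Class and method names (DataGenerator, TrainingGenerator, StackedDenseNet, VideoReader) follow the project's README, but treat them as assumptions and check the current documentation, since the package has not been updated recently:

```python
def train_and_predict(annotations_path, video_path, output_path):
    """Sketch of the DeepPoseKit workflow: train on manually annotated
    keypoints, then predict keypoints on new video frames.

    Names follow the project's README and are assumptions here; verify
    against the installed version before use.
    """
    # Imports are deferred so this sketch can be read and imported
    # without deepposekit installed.
    from deepposekit.io import DataGenerator, TrainingGenerator, VideoReader
    from deepposekit.models import StackedDenseNet

    # Load annotated frames and keypoints (HDF5 produced by the
    # annotation GUI), and wrap them for training with augmentation.
    data_generator = DataGenerator(annotations_path)
    train_generator = TrainingGenerator(data_generator)

    # A stacked fully-convolutional network that predicts keypoint
    # confidence maps for each annotated body part.
    model = StackedDenseNet(train_generator)
    model.fit(batch_size=16, n_workers=8)
    model.save(output_path)

    # Apply the trained model to unseen video frames.
    reader = VideoReader(video_path)
    return model.predict(reader)
```

The paths (annotation HDF5, video file, saved model) are placeholders to fill in for a real dataset.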

405 stars. No commits in the last 6 months. Available on PyPI.

Use this if you need to precisely track specific body parts or features of individual animals or objects in images or videos, minimizing manual annotation effort.

Not ideal if you need to track multiple individuals that look identical and cannot be easily distinguished without prior localization or tracking software.

animal-behavior motion-tracking biomechanics laboratory-research computer-vision
Flags: Stale (6 months), no dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 25 / 25
Community: 23 / 25


Stars: 405
Forks: 88
Language: Python
License: Apache-2.0
Last pushed: Jul 07, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jgraving/DeepPoseKit"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
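The same endpoint can be called from Python. The URL layout below is taken directly from the curl example; the JSON field names ("score", "stars", "forks") are assumptions for illustration only, since the real payload shape is not shown here:

```python
import json
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository.

    Path layout mirrors the curl example above:
    /api/v1/quality/<category>/<owner>/<repo>
    """
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_url("ml-frameworks", "jgraving", "DeepPoseKit")

# Fetching is left to the caller (e.g. urllib.request or requests).
# The payload below is a mocked illustration, not the real response.
sample_payload = json.loads('{"score": 58, "stars": 405, "forks": 88}')
print(url, sample_payload["score"])
```

Swap the mocked payload for `urllib.request.urlopen(url)` (or `requests.get(url).json()`) to hit the live API within the daily limits above.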