lambdaloop/anipose
🐜🐀🐒🚶 A toolkit for robust markerless 3D pose estimation
This helps researchers precisely measure how animals move in 3D space without needing to attach physical markers. You provide video footage from multiple cameras, and it outputs detailed 3D position data for specific body parts. It's used by behavioral scientists, neuroscientists, and biomechanics researchers studying animal locomotion and behavior.
Use this if you need to quantitatively analyze the 3D movements of animals or even human hands from multi-camera video recordings.
Not ideal if you only have single-camera video data or need real-time pose estimation for interactive applications.
Stars
433
Forks
81
Language
Python
License
BSD-2-Clause
Category
Computer Vision
Last pushed
Jan 20, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/lambdaloop/anipose"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
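The same data can be fetched from any HTTP client. A minimal Python sketch, assuming the endpoint returns JSON (the response schema is not documented here, so no field names are assumed):

```python
# Sketch: fetch repo-quality data from the pt-edge API.
# Assumption: the endpoint returns a JSON body; its fields are undocumented here.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality-data URL used by the API."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("computer-vision", "lambdaloop", "anipose")
# Uncomment to fetch live (counts against the 100 requests/day limit):
# data = json.load(urllib.request.urlopen(url))
# print(json.dumps(data, indent=2))
```

The URL path mirrors the category slug shown in the curl example above, so other repositories can be queried by swapping the `category`, `owner`, and `repo` segments.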
Related tools
DeepLabCut/DeepLabCut
Official implementation of DeepLabCut: Markerless pose estimation of user-defined features with...
openpifpaf/openpifpaf
Official implementation of "OpenPifPaf: Composite Fields for Semantic Keypoint Detection and...
DIYer22/bpycv
Computer vision utils for Blender (generate instance annotation, depth, and 6D pose with one line of code)
NeLy-EPFL/DeepFly3D
Motion capture (markerless 3D pose estimation) pipeline and helper GUI for tethered Drosophila.
NVIDIA-ISAAC-ROS/isaac_ros_pose_estimation
Deep learned, NVIDIA-accelerated 3D object pose estimation