ViTAE-Transformer/APTv2
The official repository for APTv2, an extension of the NeurIPS 2022 benchmark "APT-36K: A Large-scale Benchmark for Animal Pose Estimation and Tracking": https://github.com/pandorgan/APT-36K
This project offers a large-scale benchmark dataset and evaluation tools for animal pose estimation and tracking. Given video clips of various animal species, it provides annotations of animal body keypoints and their trajectories across frames. It is well suited to researchers, zoologists, and ethologists studying animal behavior, or to anyone developing computer vision models for animal monitoring.
No commits in the last 6 months.
Use this if you need a high-quality, large-scale dataset to train or test algorithms for accurately detecting and tracking animal body keypoints in video.
Not ideal if your focus is on static image analysis or if you only need general animal detection rather than detailed pose estimation and tracking over time.
Stars: 28
Forks: —
Language: Python
License: Apache-2.0
Category: —
Last pushed: May 15, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ViTAE-Transformer/APTv2"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
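The curl command above can also be issued from Python. A minimal sketch using only the standard library follows; the endpoint URL is taken from the listing, but the shape of the JSON response is an assumption, so the example only parses and returns it without relying on specific fields.

```python
# Minimal sketch: fetch this repo's quality record from the API listed above.
# The URL comes from the page; the response schema is NOT documented here,
# so we only parse the JSON and hand it back as-is.
import json
import urllib.request

API_URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "ml-frameworks/ViTAE-Transformer/APTv2"
)


def fetch_repo_quality(url: str = API_URL) -> dict:
    """Fetch the quality record as parsed JSON (keyless tier: 100 requests/day)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Requires network access; prints whatever the API returns.
    print(fetch_repo_quality())
```

With a free API key (1,000 requests/day), you would typically pass it as a header or query parameter; the exact mechanism is not specified on this page, so consult the API's own documentation.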
Higher-rated alternatives
talmolab/sleap
A deep learning framework for multi-animal pose tracking.
kennymckormick/pyskl
A toolbox for skeleton-based action recognition.
open-mmlab/mmaction2
OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark
jgraving/DeepPoseKit
A toolkit for pose estimation using deep learning
DenisTome/Lifting-from-the-Deep-release
Implementation of "Lifting from the Deep: Convolutional 3D Pose Estimation from a Single Image"