robot-perception-group/AirPose
This repository contains the code for AirPose, a multi-view fusion network for human pose and shape estimation.
This system analyzes human movement in outdoor settings using drones. Given video footage from multiple autonomous drones equipped with RGB cameras, it reconstructs a 3D model of a person's pose and body shape. It is aimed at researchers, biomechanists, and sports analysts who need to capture detailed human motion without physical markers.
No commits in the last 6 months.
Use this if you need to accurately capture and analyze 3D human motion and body shape in complex, outdoor environments using aerial drone footage.
Not ideal if you are working indoors, require real-time motion capture without prior processing, or do not have access to multi-drone aerial video.
Stars
56
Forks
5
Language
Python
License
MIT
Category
ML Frameworks
Last pushed
Feb 21, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/robot-perception-group/AirPose"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
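The endpoint above can also be called from a script. A minimal Python sketch, assuming the API returns JSON; the field names shown in the sample payload (stars, forks, license) are hypothetical, not a documented schema:

```python
import json

# Base URL taken from the curl command above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the per-repository endpoint path.
    return f"{API_BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "robot-perception-group", "AirPose")
print(url)

# Hypothetical response payload, for illustration only; fetch the real
# one with urllib.request.urlopen(url) and an API key if you have one.
sample = json.loads('{"stars": 56, "forks": 5, "license": "MIT"}')
print(sample["stars"], sample["license"])
```

The URL builder keeps the category segment explicit, since the API path encodes it (here `ml-frameworks`).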
Higher-rated alternatives
talmolab/sleap
A deep learning framework for multi-animal pose tracking.
kennymckormick/pyskl
A toolbox for skeleton-based action recognition.
open-mmlab/mmaction2
OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark
jgraving/DeepPoseKit
a toolkit for pose estimation using deep learning
kenshohara/3D-ResNets-PyTorch
3D ResNets for Action Recognition (CVPR 2018)