jaehyunnn/ViTPose_pytorch

An unofficial implementation of ViTPose [Y. Xu et al., 2022]

Quality score: 53/100 (Established)

This tool helps researchers and computer vision engineers detect and map the key joint points of people in images. You input an image containing one or more people, and it outputs an image with detected human poses highlighted. This is ideal for anyone working with human pose estimation in fields like action recognition, augmented reality, or sports analytics.


Use this if you need to accurately identify and visualize human body keypoints in images with a vision-transformer-based model.

Not ideal if you require real-time pose estimation for video streams or need a solution that runs efficiently on resource-constrained devices without extensive setup.

Topics: human-pose-estimation, computer-vision, action-recognition, image-analysis, robotics

No package published; no dependents.

Maintenance: 10/25
Adoption: 10/25
Maturity: 16/25
Community: 17/25


Stars: 125
Forks: 21
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Jan 21, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/jaehyunnn/ViTPose_pytorch"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
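The curl command above can also be issued from Python. This is a minimal sketch based only on the endpoint shown; the `quality_url`/`fetch_quality` helper names are mine, and the assumption that the endpoint returns JSON is not documented here.

```python
# Sketch of querying the pt-edge quality endpoint shown above.
# Only the URL pattern comes from the curl example; the helper names
# and the JSON response assumption are illustrative, not documented.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given repository."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"


def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch the quality data, assuming a JSON response.

    Subject to the rate limits described above (100 requests/day
    without a key).
    """
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same request as the curl example, without actually sending it here:
    print(quality_url("transformers", "jaehyunnn", "ViTPose_pytorch"))
```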