jaehyunnn/ViTPose_pytorch
An unofficial implementation of ViTPose [Y. Xu et al., 2022]
This tool helps researchers and computer vision engineers detect the keypoints (body joints) of people in images. You input an image containing one or more people, and it outputs the image with the detected human poses highlighted. It is well suited to anyone working on human pose estimation in fields like action recognition, augmented reality, or sports analytics.
Use this if you need to accurately identify and visualize human body keypoints in images using a powerful, vision transformer-based model.
Not ideal if you require real-time pose estimation for video streams or need a solution that runs efficiently on resource-constrained devices without extensive setup.
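Pose models in this family typically predict one heatmap per keypoint, which is then decoded into image coordinates. The repo's own inference API is not shown on this page, so the sketch below only illustrates the generic heatmap-to-keypoint decoding step; the function name and array shapes are assumptions, not the project's actual interface.

```python
import numpy as np

def decode_heatmaps(heatmaps: np.ndarray) -> np.ndarray:
    """Decode (K, H, W) keypoint heatmaps into a (K, 3) array of
    (x, y, confidence), taking each channel's argmax as the joint location.

    NOTE: illustrative only; ViTPose_pytorch's real post-processing may
    differ (e.g. sub-pixel refinement, flip-test averaging)."""
    num_kpts, h, w = heatmaps.shape
    flat = heatmaps.reshape(num_kpts, -1)
    idx = flat.argmax(axis=1)                      # flat index of each peak
    scores = flat[np.arange(num_kpts), idx]        # peak value = confidence
    xs = idx % w                                   # column -> x coordinate
    ys = idx // w                                  # row -> y coordinate
    return np.stack([xs, ys, scores], axis=1)
```

In practice you would scale the decoded (x, y) back from heatmap resolution to the original image resolution before drawing the skeleton.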
Stars: 125
Forks: 21
Language: Jupyter Notebook
License: Apache-2.0
Category:
Last pushed: Jan 21, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/jaehyunnn/ViTPose_pytorch"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
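The same endpoint can be queried from Python instead of curl. The URL pattern below comes straight from the command above; the response schema is not documented on this page, so the fetch helper simply returns whatever JSON the endpoint serves.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given GitHub repository."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the repository's quality data as a dict (needs network access;
    the JSON fields are whatever the API returns and are not assumed here)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)
```

Unauthenticated callers get 100 requests/day, so cache responses if you are scanning many repositories.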
Related models
UdbhavPrasad072300/Transformer-Implementations
Library - Vanilla, ViT, DeiT, BERT, GPT
tintn/vision-transformer-from-scratch
A Simplified PyTorch Implementation of Vision Transformer (ViT)
icon-lab/ResViT
Official Implementation of ResViT: Residual Vision Transformers for Multi-modal Medical Image Synthesis
gupta-abhay/pytorch-vit
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
NVlabs/GroupViT
Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text...