open-mmlab/mmaction2
OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark
This toolbox analyzes videos to recognize the actions and events happening in them. You feed in raw video footage, and it outputs activity classifications, detected actions, or the specific moments that match a query. It suits researchers, security analysts, and anyone who needs to extract insights from video content automatically.
4,951 stars. No commits in the last 6 months. Available on PyPI.
Use this if you need to automatically identify, categorize, or locate specific actions and events in large collections of video data.
Not ideal if your primary need is basic video editing, simple playback, or managing video files without needing advanced content analysis.
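The typical workflow is a two-step inference API: build a recognizer from a config and checkpoint, then run it on a video file. Below is a minimal sketch assuming the package installed from PyPI (pip install mmaction2); the config and checkpoint paths are placeholders to be swapped for real entries from the mmaction2 model zoo.

from mmaction.apis import inference_recognizer, init_recognizer

# Placeholder paths: substitute a config and its matching checkpoint
# from the mmaction2 model zoo (e.g. a TSN or SlowFast recognizer).
config_file = 'path/to/recognizer_config.py'
checkpoint_file = 'path/to/recognizer_checkpoint.pth'

# Build the recognizer once, then reuse it across many videos.
model = init_recognizer(config_file, checkpoint_file, device='cuda:0')

# Run inference on a raw video file; the returned result holds the
# predicted scores over the action classes of the training dataset.
result = inference_recognizer(model, 'demo/demo.mp4')
print(result)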
Stars: 4,951
Forks: 1,345
Language: Python
License: Apache-2.0
Category: ML Frameworks
Last pushed: Aug 14, 2024
Commits (30d): 0
Dependencies: 8
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/open-mmlab/mmaction2"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
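The same endpoint can be queried from Python instead of curl. A short sketch, assuming the endpoint returns JSON (the response schema is not documented here, so the snippet just prints the raw payload):

import requests

url = ('https://pt-edge.onrender.com/api/v1/quality/'
       'ml-frameworks/open-mmlab/mmaction2')
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface HTTP errors, e.g. hitting the rate limit
print(resp.json())       # keyless tier: 100 requests/day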
Related frameworks
talmolab/sleap
A deep learning framework for multi-animal pose tracking.
kennymckormick/pyskl
A toolbox for skeleton-based action recognition.
jgraving/DeepPoseKit
A toolkit for pose estimation using deep learning.
kenshohara/3D-ResNets-PyTorch
3D ResNets for Action Recognition (CVPR 2018)
DenisTome/Lifting-from-the-Deep-release
Implementation of "Lifting from the Deep: Convolutional 3D Pose Estimation from a Single Image"