MMAction2 and MMSkeleton
These tools are **ecosystem siblings** within the OpenMMLab framework. MMSkeleton is a toolbox dedicated to skeleton-based methods, covering human pose estimation, skeleton-based action recognition, and action synthesis. MMAction2 is a broader, next-generation video understanding toolbox that has absorbed much of that territory: it includes skeleton-based action recognition among its supported tasks, is the more actively developed and widely adopted of the two, and has effectively superseded MMSkeleton for action recognition.
About mmaction2
open-mmlab/mmaction2
OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark
This tool helps users analyze videos to understand the actions and events happening within them. You input raw video footage, and it outputs activity classifications, detected actions, or specific moments retrieved for a query. It is aimed at researchers, security analysts, and anyone who needs to extract insights from video content automatically.
About mmskeleton
open-mmlab/mmskeleton
An OpenMMLab toolbox for human pose estimation, skeleton-based action recognition, and action synthesis.
This project helps analyze and understand human movement from video or other visual data. It takes in video footage or image sequences showing people and outputs recognized actions or detailed body-pose (skeleton) information. It is aimed at researchers and developers working on human behavior analysis, sports analytics, or interactive systems.
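As an illustration of what "skeleton-based" input looks like, the sketch below builds a toy skeleton sequence as a `(frames, joints, coordinates)` array and labels it with a trivial motion-energy rule. The function names, the threshold, and the two-class rule are all hypothetical, invented for this sketch; real toolboxes such as MMSkeleton and MMAction2 feed the same kind of tensor into graph convolutional networks (e.g., ST-GCN) rather than hand-written rules.

```python
import numpy as np

# A skeleton sequence: T frames x V joints x C coordinates (2-D here).
# The axis layout mirrors the common ST-GCN convention; the data is synthetic.
T, V, C = 30, 17, 2  # 17 joints matches the COCO keypoint layout
rng = np.random.default_rng(0)

def motion_energy(seq: np.ndarray) -> float:
    """Mean per-frame joint displacement: a crude summary of how much movement
    the sequence contains."""
    return float(np.linalg.norm(np.diff(seq, axis=0), axis=-1).mean())

def classify(seq: np.ndarray, threshold: float = 0.05) -> str:
    """Toy rule (hypothetical): call a sequence 'moving' or 'still'."""
    return "moving" if motion_energy(seq) > threshold else "still"

still = np.tile(rng.random((1, V, C)), (T, 1, 1))                   # frozen pose
moving = still + np.cumsum(rng.normal(0, 0.1, (T, V, C)), axis=0)   # random walk

print(classify(still), classify(moving))
```

Real pipelines add a person dimension (multiple skeletons per frame) and learn the classifier from labeled data, but the input representation is essentially this array.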