kenshohara/3D-ResNets-PyTorch
3D ResNets for Action Recognition (CVPR 2018)
This project provides PyTorch implementations of 3D ResNets for recognizing actions in video clips. It takes video files or extracted frame sequences as input and outputs a predicted action class for each clip (a minimal sketch of that contract follows below). Its primary users are researchers and engineers working with large video datasets, for example in computer vision research, surveillance, or content moderation.
4,043 stars. No commits in the last 6 months.
Use this if you need to automatically identify and categorize human actions or other activities within video footage, especially if you work with large-scale datasets like Kinetics, Moments in Time, or ActivityNet.
Not ideal if you're looking for real-time action recognition on live streams or if your primary need is object detection rather than action classification.
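To illustrate the clip-in, label-out contract, here is a minimal sketch using torchvision's r3d_18, a related 3D-ResNet implementation used here purely as a stand-in; this repo ships its own model definitions and training scripts (driven by main.py), so consult its README for the actual invocation. The sketch assumes torchvision 0.13 or newer.

import torch
from torchvision.models.video import r3d_18

# Stand-in for illustration: an 18-layer 3D ResNet pretrained on Kinetics-400.
model = r3d_18(weights="DEFAULT")
model.eval()

# A video clip is a 5-D tensor: (batch, channels, frames, height, width).
clip = torch.randn(1, 3, 16, 112, 112)

with torch.no_grad():
    logits = model(clip)         # shape (1, 400): one score per action class
    pred = logits.argmax(dim=1)  # index of the predicted Kinetics-400 class
print(pred.item())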
Stars: 4,043
Forks: 935
Language: Python
License: MIT
Category: ml-frameworks
Last pushed: Jan 20, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/kenshohara/3D-ResNets-PyTorch"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
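The same data can be fetched from Python; here is a minimal sketch using the endpoint from the curl example above. The JSON field names aren't documented on this page, so the payload is printed as-is rather than assuming a schema.

import requests

# Endpoint copied from the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/kenshohara/3D-ResNets-PyTorch")

# Anonymous access allows 100 requests/day; a free key raises that to
# 1,000/day (how the key is attached isn't documented on this page).
resp = requests.get(URL, timeout=10)
resp.raise_for_status()
print(resp.json())  # dump the whole payload; schema not documented here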
Related frameworks
talmolab/sleap
A deep learning framework for multi-animal pose tracking.
kennymckormick/pyskl
A toolbox for skeleton-based action recognition.
open-mmlab/mmaction2
OpenMMLab's next-generation video understanding toolbox and benchmark.
jgraving/DeepPoseKit
A toolkit for pose estimation using deep learning.
DenisTome/Lifting-from-the-Deep-release
Implementation of "Lifting from the Deep: Convolutional 3D Pose Estimation from a Single Image".