kenshohara/3D-ResNets

3D ResNets for Action Recognition

Score: 44 / 100 (Emerging)

This project lets researchers and developers working with video data train and evaluate 3D ResNet models that automatically identify actions in video clips. It converts video files into image sequences, then processes those sequences to recognize specific human activities or events. It is aimed primarily at computer vision researchers and AI model trainers focused on video analysis.

122 stars. No commits in the last 6 months.

Use this if you are a machine learning researcher or engineer looking to train and evaluate 3D ResNet models for action recognition on large video datasets like ActivityNet or Kinetics.

Not ideal if you want a tool to classify your own videos using pre-trained models without training new ones, or if you are not comfortable with command-line operations and PyTorch/Torch.

video-analysis action-recognition computer-vision machine-learning-research deep-learning-training
Stale (6 months) · No package · No dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 18 / 25


Stars: 122
Forks: 21
Language: Lua
License: MIT
Last pushed: Nov 29, 2017
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/kenshohara/3D-ResNets"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
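The same endpoint can be called from a script. A minimal sketch in Python, assuming only the URL shown above; the `quality_url` helper and the response format are assumptions, not part of the documented API:

```python
import json
from urllib.request import urlopen

# Base endpoint taken from the curl example above; everything
# beyond the URL pattern itself is an assumption.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "kenshohara", "3D-ResNets")
print(url)

# Uncomment to actually fetch the report (100 requests/day without a key):
# data = json.load(urlopen(url))
# print(data)
```

The fetch itself is left commented out so the snippet stays within the anonymous rate limit; swap in your preferred HTTP client if you use an API key.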