stuarteiffert/RNN-for-Human-Activity-Recognition-using-2D-Pose-Input

Activity Recognition from 2D pose using an LSTM RNN

Score: 41 / 100 (Emerging)

This project helps researchers and engineers classify human actions like jumping or waving, as well as animal behaviors, using standard video camera footage. It takes in a series of 2D body joint positions (like a stick figure) extracted from video frames and outputs the likely activity being performed. This is useful for anyone studying movement patterns, human-robot interaction, or animal behavior.
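The core input format is simple: each video frame becomes a flat vector of (x, y) joint coordinates, and consecutive frames are grouped into fixed-length windows that the LSTM classifies. A minimal sketch of that windowing step, assuming an OpenPose-style skeleton of 18 keypoints and 32-frame windows (the notebook's exact dimensions may differ):

```python
import numpy as np

# Assumed dimensions for illustration: 18 keypoints with (x, y) coordinates,
# grouped into 32-frame windows. The project's actual values may differ.
N_JOINTS = 18
N_FEATURES = N_JOINTS * 2  # x and y per keypoint -> 36 features per frame
WINDOW = 32                # frames per classification window

def frames_to_windows(keypoints):
    """Flatten (frames, joints, 2) pose data into (windows, WINDOW, N_FEATURES)
    sequences suitable for an LSTM classifier, dropping any trailing partial window."""
    flat = keypoints.reshape(len(keypoints), -1)  # (frames, 36)
    n_windows = len(flat) // WINDOW
    return flat[:n_windows * WINDOW].reshape(n_windows, WINDOW, N_FEATURES)

# 100 frames of placeholder pose data -> 3 full 32-frame windows
poses = np.zeros((100, N_JOINTS, 2))
windows = frames_to_windows(poses)
print(windows.shape)  # (3, 32, 36)
```

Each window is then one training or inference sample; the LSTM's final hidden state feeds a softmax over the activity classes.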

296 stars. No commits in the last 6 months.

Use this if you need to automatically identify human or animal actions from standard 2D video, especially when working with limited datasets where simpler models are beneficial.

Not ideal if you require highly precise 3D motion analysis, need to classify very subtle movements, or are working with video where pose estimation is frequently inaccurate.

human-activity-recognition animal-behavior motion-analysis robotics-interaction video-analytics
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 23 / 25


Stars: 296
Forks: 76
Language: Jupyter Notebook
License: none
Last pushed: Jun 02, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/stuarteiffert/RNN-for-Human-Activity-Recognition-using-2D-Pose-Input"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.