LSTM-Human-Activity-Recognition and RNN-for-Human-Activity-Recognition-using-2D-Pose-Input
These two tools are direct alternatives: both implement LSTM RNNs for human activity recognition, but they differ in input modality (smartphone sensor data vs. 2D pose input).
About LSTM-Human-Activity-Recognition
guillaume-chevalier/LSTM-Human-Activity-Recognition
Human Activity Recognition example using TensorFlow on smartphone sensors dataset and an LSTM RNN. Classifying the type of movement amongst six activity categories - Guillaume Chevalier
This project helps anyone working with smartphone sensor data automatically identify six common human activities: walking, walking upstairs, walking downstairs, sitting, standing, and lying down (labelled "laying" in the dataset). It takes raw accelerometer and gyroscope readings as input and outputs a classification of the activity being performed. This is useful for researchers, product developers, and data analysts in fields like health, fitness, or behavioral science.
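To make the pipeline concrete, here is a minimal NumPy sketch of the idea: an LSTM layer consumes a fixed window of multi-channel sensor readings and the last hidden state is classified into six activities. The window shape (128 timesteps, 9 channels) matches the UCI smartphone dataset this repo uses; the weights are random placeholders, not the project's trained model, and the helper names are my own.

```python
import numpy as np

rng = np.random.default_rng(0)

# UCI HAR-style input: one window of 128 timesteps x 9 sensor channels
# (3-axis body acceleration, total acceleration, and gyroscope).
T, D, H, C = 128, 9, 32, 6  # timesteps, channels, hidden units, classes

# Hypothetical randomly-initialised weights, shown only for their shapes.
Wx = rng.normal(0, 0.1, (D, 4 * H))   # input-to-gates
Wh = rng.normal(0, 0.1, (H, 4 * H))   # hidden-to-gates
b = np.zeros(4 * H)
Wo = rng.normal(0, 0.1, (H, C))       # final classification layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_classify(x):
    """Run one LSTM layer over a (T, D) window; return class probabilities."""
    h = np.zeros(H)
    c = np.zeros(H)
    for t in range(T):
        gates = x[t] @ Wx + h @ Wh + b
        i, f, g, o = np.split(gates, 4)          # input/forget/cell/output gates
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    logits = h @ Wo                              # classify from the last state
    p = np.exp(logits - logits.max())            # numerically stable softmax
    return p / p.sum()

window = rng.normal(size=(T, D))                 # stand-in for a real window
probs = lstm_classify(window)
print(probs.shape)  # (6,): one probability per activity class
```

In the actual project the weights are learned with TensorFlow rather than sampled randomly, but the data flow (sliding sensor window in, six-way softmax out) is the same.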
About RNN-for-Human-Activity-Recognition-using-2D-Pose-Input
stuarteiffert/RNN-for-Human-Activity-Recognition-using-2D-Pose-Input
Activity Recognition from 2D pose using an LSTM RNN
This project helps researchers and engineers classify human actions like jumping or waving, as well as animal behaviors, using standard video camera footage. It takes in a series of 2D body joint positions (like a stick figure) extracted from video frames and outputs the likely activity being performed. This is useful for anyone studying movement patterns, human-robot interaction, or animal behavior.
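The "series of 2D body joint positions" can be sketched as a simple preprocessing step: each video frame's skeleton is flattened into one feature vector, and the sequence of those vectors is what the LSTM consumes. The 18-joint layout mirrors OpenPose-style output; the centring/scaling normalisation and function name here are illustrative assumptions, not the repo's exact code.

```python
import numpy as np

# A hypothetical clip: 32 video frames, 18 body joints (OpenPose-style),
# each joint an (x, y) pixel coordinate.
frames, joints = 32, 18
rng = np.random.default_rng(1)
poses = rng.uniform(0, 640, size=(frames, joints, 2))

def pose_to_features(clip):
    """Flatten each frame's joints into one vector, roughly centred and scaled.

    Centring on the per-frame mean joint position and dividing by the spread
    makes the features less sensitive to where the person stands in frame
    (an assumed normalisation, shown for illustration).
    """
    centred = clip - clip.mean(axis=1, keepdims=True)
    scale = centred.std() + 1e-8
    return (centred / scale).reshape(clip.shape[0], clip.shape[1] * 2)

x = pose_to_features(poses)
print(x.shape)  # (32, 36): one 36-dim "stick figure" vector per frame
```

From here the sequence `x` is fed to an LSTM classifier in exactly the same way as the sensor windows in the first project, which is why the two tools are natural alternatives.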