amitsou/DeepEgoHA-3R
A curated collection of research at the forefront of Egocentric Human Activity Recognition (HAR) and Action Anticipation through Deep Learning
This collection helps researchers and practitioners understand and predict human actions from first-person video footage. It compiles recent academic papers on egocentric vision, where the camera is worn by the person performing the activity, and focuses on recognizing current activities and anticipating future actions. It is most useful for computer vision researchers, particularly those studying human-computer interaction, robotics, or video surveillance.
No commits in the last 6 months.
Use this if you are a researcher or advanced practitioner looking for a curated list of leading publications on recognizing and anticipating human actions from video captured through a person's own perspective.
Not ideal if you are looking for ready-to-use software or a beginner's guide to computer vision; this is a research publication compendium.
Stars: 10
Forks: —
Language: —
License: —
Category:
Last pushed: Feb 07, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/amitsou/DeepEgoHA-3R"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
OxWearables/stepcount
Improved Step Counting via Foundation Models for Wrist-Worn Accelerometers
OxWearables/actinet
An activity classification model based on self-supervised learning for wrist-worn accelerometer data.
aqibsaeed/Human-Activity-Recognition-using-CNN
Convolutional Neural Network for Human Activity Recognition in Tensorflow
felixchenfy/Realtime-Action-Recognition
Apply ML to the skeletons from OpenPose; 9 actions; multiple people. (WARNING: I'm sorry that...
guillaume-chevalier/LSTM-Human-Activity-Recognition
Human Activity Recognition example using TensorFlow on smartphone sensors dataset and an LSTM...