yujmo/CZU_MHAD
CZU-MHAD: A multimodal dataset for human action recognition utilizing a depth camera and 10 wearable inertial sensors
This dataset provides rich information for understanding and categorizing human movements. It includes synchronized depth video, 3D body-joint positions, and motion data (acceleration and angular velocity) from ten wearable sensors placed on the body. Researchers in fields such as sports analysis or robotics can use it to develop and test systems that recognize complex human actions.
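Working with this kind of multimodal data usually means aligning streams captured at different rates (e.g., a ~30 fps depth camera against ~100 Hz inertial sensors). Below is a minimal sketch of nearest-timestamp alignment using synthetic data; the sampling rates and array layout are illustrative assumptions, not the dataset's actual file format:

```python
import numpy as np

# Synthetic timestamps: depth camera at ~30 fps, inertial sensor at ~100 Hz.
# (Illustrative rates only; CZU-MHAD's actual sampling rates may differ.)
depth_ts = np.arange(0.0, 2.0, 1 / 30)   # 60 depth-frame timestamps, seconds
imu_ts = np.arange(0.0, 2.0, 1 / 100)    # 200 inertial timestamps, seconds
imu_acc = np.random.default_rng(0).normal(size=(len(imu_ts), 3))  # fake accel

# For each depth frame, pick the inertial sample with the nearest timestamp.
idx = np.searchsorted(imu_ts, depth_ts)
idx = np.clip(idx, 1, len(imu_ts) - 1)
prev_closer = np.abs(imu_ts[idx - 1] - depth_ts) < np.abs(imu_ts[idx] - depth_ts)
idx[prev_closer] -= 1

aligned_acc = imu_acc[idx]  # one (x, y, z) accel reading per depth frame
print(aligned_acc.shape)    # → (60, 3)
```

In a real pipeline the same nearest-neighbor step would run once per sensor, giving one fused feature row per depth frame.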
No commits in the last 6 months.
Use this if you need a comprehensive, multimodal dataset of various human actions, captured simultaneously by both a depth camera and multiple wearable inertial sensors, for developing advanced action recognition algorithms.
Not ideal if you only need simple video-based action recognition or do not have the technical expertise to work with raw sensor data and 3D skeleton tracking.
Stars
26
Forks
2
Language
MATLAB
License
—
Category
—
Last pushed
Jun 02, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/yujmo/CZU_MHAD"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
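The same endpoint can be queried from Python with the standard library. The sketch below only builds the request and shows a fetch helper; no API-key header is included, since the page states none is needed for up to 100 requests/day:

```python
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/computer-vision/yujmo/CZU_MHAD"

def fetch_quality(url: str = URL) -> dict:
    """Fetch this repo's quality record as JSON (no key needed
    for up to 100 requests/day, per the page above)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Build the request object without sending it, so the snippet
# works even where network access is restricted:
req = urllib.request.Request(URL, headers={"Accept": "application/json"})
print(req.full_url)
```

Call `fetch_quality()` where outbound network access is available.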
Higher-rated alternatives
quic/sense
Enhance your application with the ability to see and interact with humans using any RGB camera.
AlexanderMelde/SPHAR-Dataset
Surveillance Perspective Human Action Recognition Dataset: 7759 Videos from 14 Action Classes,...
CV-ZMH/human-action-recognition
Multi Person Skeleton Based Action Recognition and Tracking
Event-AHU/HARDVS
[AAAI-2024] HARDVS: Revisiting Human Activity Recognition with Dynamic Vision Sensors
mmact19/2019
MMAct: A Large-Scale Dataset for Cross Modal Learning on Human Action Understanding