yujmo/CZU_MHAD

CZU-MHAD: A multimodal dataset for human action recognition utilizing a depth camera and 10 wearable inertial sensors

Quality score: 30 / 100 (Emerging)

This dataset provides rich information for understanding and categorizing human movements. It includes synchronized depth video, 3D positions of body joints (skeleton data), and motion data (acceleration and angular velocity) from ten wearable inertial sensors placed on the body. Researchers in fields such as sports analysis or robotics can use it to develop and test systems that recognize complex human actions.

No commits in the last 6 months.

Use this if you need a comprehensive, multimodal dataset of various human actions, captured simultaneously by both a depth camera and multiple wearable inertial sensors, for developing advanced action recognition algorithms.

Not ideal if you only need simple video-based action recognition or do not have the technical expertise to work with raw sensor data and 3D skeleton tracking.

Tags: human-action-recognition, motion-analysis, biomechanics, robotics, behavioral-science
Status: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 7 / 25
(The four subscores sum to the overall 30 / 100.)


Stars: 26
Forks: 2
Language: MATLAB
License: (not listed)
Last pushed: Jun 02, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/yujmo/CZU_MHAD"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
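For scripted access, the curl call above can be wrapped in a few lines of Python. This is a minimal sketch: the endpoint URL comes from the snippet above, but the response schema is not documented here, so the helper simply returns the parsed JSON for the caller to inspect.

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record as JSON.

    The response fields are not documented on this page, so no
    schema is assumed; callers should inspect the returned dict.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Example: the URL for this dataset's record.
url = quality_url("computer-vision", "yujmo", "CZU_MHAD")
```

Without an API key this stays within the 100-requests/day anonymous limit; a free key raises it to 1,000/day.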