SPHAR-Dataset and S-SPHAR-Dataset
These are two companion datasets from the same author: one aggregates real-world surveillance footage (SPHAR) and the other is generated synthetically (S-SPHAR), both for human action recognition.
About SPHAR-Dataset
AlexanderMelde/SPHAR-Dataset
Surveillance Perspective Human Action Recognition Dataset: 7,759 videos from 14 action classes, aggregated from multiple sources, all cropped spatio-temporally and filmed from a surveillance-camera-like position.
SPHAR is a collection of 7,759 video clips for training and evaluating automated systems that identify human actions in surveillance footage. The clips are aggregated from multiple sources and labeled with 14 distinct actions (such as falling, running, or stealing), all filmed from a surveillance camera's point of view. The dataset targets researchers and engineers developing AI for public safety, security monitoring, or behavior analysis in environments with fixed camera setups.
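A labeled video dataset like this is commonly organized as one folder per action class. As a minimal sketch, assuming a hypothetical folder-per-class layout (e.g. videos/falling/clip.mp4 — check the repository for the actual structure), indexing the clips by action could look like:

```python
from pathlib import Path
from collections import defaultdict
import tempfile

def index_clips(root):
    """Map each action-class folder name to the list of video files inside it.

    Assumes the hypothetical layout <root>/<action_class>/<clip>.mp4;
    the real repository layout may differ.
    """
    clips = defaultdict(list)
    for video in Path(root).glob("*/*.mp4"):
        clips[video.parent.name].append(video.name)
    return dict(clips)

# Demo on a small mock directory mimicking the assumed layout.
with tempfile.TemporaryDirectory() as tmp:
    for cls, names in {"falling": ["a.mp4"], "running": ["b.mp4", "c.mp4"]}.items():
        class_dir = Path(tmp) / cls
        class_dir.mkdir()
        for name in names:
            (class_dir / name).touch()  # empty placeholder files stand in for clips
    index = index_clips(tmp)
    print(sorted(index))          # class names found
    print(len(index["running"]))  # number of clips in one class
```

Such an index is a typical starting point for splitting clips into train/test sets or feeding them to a video data loader.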
About S-SPHAR-Dataset
AlexanderMelde/S-SPHAR-Dataset
Synthetically Generated Surveillance Perspective Human Action Recognition Dataset: 6,901 videos from 10 action classes, produced by a 3D simulation, all cropped spatio-temporally and filmed from a surveillance-camera-like position.
This dataset provides 6,901 synthetically generated videos of human actions across 10 action classes, rendered from typical surveillance camera angles. It is designed for training and testing systems that automatically identify activities in public spaces; because the footage is simulated, it can supplement real-world data where filming is impractical. Security analysts and researchers building intelligent video monitoring systems can use these videos to improve the accuracy of their action recognition models.