magdalenafuentes/urbansas

Urban Sound & Sight dataset and baseline

Score: 28 / 100 (Experimental)

This project helps researchers and engineers analyze urban soundscapes by providing a standardized dataset of audio and video recordings from city environments, plus a baseline model. The baseline takes raw urban audio-visual data as input and outputs predictions about the sound events present. It's designed for academic researchers, urban planners, and audio engineers studying environmental sound or developing machine learning models for urban sensing.

No commits in the last 6 months.

Use this if you need to train or evaluate machine learning models for urban sound event detection and localization using a well-structured, real-world dataset.

Not ideal if you're looking for a simple, out-of-the-box application for real-time urban sound monitoring without any coding.

Tags: urban-soundscapes, environmental-acoustics, audio-analysis, smart-cities, urban-planning
Flags: Stale (6m), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 4 / 25
Maturity: 16 / 25
Community: 8 / 25

The four categories are each scored out of 25 and sum to the overall score: 0 + 4 + 16 + 8 = 28 / 100.


Stars: 8
Forks: 1
Language: Jupyter Notebook
License: BSD-3-Clause
Last pushed: Dec 05, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/magdalenafuentes/urbansas"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
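The same request can be made from Python with the standard library. This is a minimal sketch: the endpoint URL comes from the curl command above, but the response field names used here (`score`, `label`) are assumptions about the JSON schema, not documented fields; adjust them to match the actual API output.

```python
import json
import urllib.request

API_URL = ("https://pt-edge.onrender.com/api/v1/quality/"
           "ml-frameworks/magdalenafuentes/urbansas")

def fetch_quality(url: str = API_URL) -> dict:
    """Fetch the quality card as JSON (no API key needed, 100 requests/day)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def summarize(card: dict) -> str:
    """One-line summary of a quality payload.

    NOTE: 'score' and 'label' are hypothetical key names assumed for
    illustration; check the real response before relying on them.
    """
    score = card.get("score", "?")
    label = card.get("label", "")
    return f"{score}/100 {label}".strip()

if __name__ == "__main__":
    print(summarize(fetch_quality()))
```

Keeping `summarize` separate from the network call makes the parsing logic easy to test offline with a sample payload.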