curlsloth/Capstone_AcousticEnvironment-DeepNeuralNet

A deep neural network model combining audio signal processing with a pre-trained audio CNN achieved 90.1% adjusted accuracy (a 27.6% improvement) in classifying the environment of an audio recording.

12 / 100 · Experimental

This project helps environmental researchers, conservationists, and urban planners automatically identify whether an audio recording comes from an urban or a natural environment. You input a 10-second audio clip, and the system outputs a classification indicating whether the soundscape is urban or natural. It is designed for professionals who need to categorize large volumes of environmental audio data without manual review.
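The input/output contract described above (10-second clip in, urban/natural label out) can be illustrated with a toy sketch. The repository's actual classifier is a deep CNN in a Jupyter notebook; the spectral-centroid heuristic, sample rate, and threshold below are purely illustrative assumptions, not the project's method:

```python
# Toy illustration only: the real classifier is a deep CNN.
# The spectral-centroid heuristic, sample rate, and 2 kHz
# threshold are assumptions for demonstration.
import numpy as np

SR = 22050          # assumed sample rate
CLIP_SECONDS = 10   # the system expects 10-second clips

def spectral_centroid(clip: np.ndarray, sr: int = SR) -> float:
    """Mean frequency of the clip, weighted by spectral magnitude."""
    spectrum = np.abs(np.fft.rfft(clip))
    freqs = np.fft.rfftfreq(clip.size, d=1.0 / sr)
    return float((freqs * spectrum).sum() / (spectrum.sum() + 1e-12))

def classify(clip: np.ndarray, threshold_hz: float = 2000.0) -> str:
    # Stand-in for the notebook's CNN: brighter spectra -> "urban".
    return "urban" if spectral_centroid(clip) > threshold_hz else "natural"

# Synthetic 10-second clip: a low 440 Hz tone classifies as "natural".
t = np.linspace(0, CLIP_SECONDS, CLIP_SECONDS * SR, endpoint=False)
low = np.sin(2 * np.pi * 440.0 * t)
label = classify(low)
```

A real pipeline would replace the heuristic with the notebook's feature extraction and CNN inference, but the clip-in/label-out shape is the same.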

No commits in the last 6 months.

Use this if you need to automatically sort or analyze audio recordings from soundscapes to determine if they originate from an urban or natural setting.

Not ideal if you require highly precise classification of specific audio events or need to identify detailed sound elements within the recording.

environmental-monitoring soundscape-ecology urban-planning conservation audio-analysis
No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 8 / 25
Community 0 / 25


Stars: 8
Forks: —
Language: Jupyter Notebook
License: —
Last pushed: Mar 25, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/curlsloth/Capstone_AcousticEnvironment-DeepNeuralNet"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
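The same data can be fetched programmatically. This sketch uses only the Python standard library; the endpoint URL is taken from the curl command above, but the response's field names are not documented here, so treat any parsed keys as assumptions:

```python
# Sketch: fetching the quality data as JSON with Python's standard
# library instead of curl. Endpoint URL is from the page; response
# field names are undocumented assumptions.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"
repo = "curlsloth/Capstone_AcousticEnvironment-DeepNeuralNet"
url = f"{BASE}/ml-frameworks/{repo}"

def fetch_quality(url: str) -> dict:
    # No API key needed for up to 100 requests/day.
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Usage (performs a network request):
# data = fetch_quality(url)
# print(data)
```

For the 1,000/day tier, you would presumably attach the free key as a header or query parameter; the exact mechanism is not specified on this page.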