UCSD-E4E/PyHa

A repo designed to convert audio-based "weak" labels into "strong" intra-clip labels. Provides a pipeline to compare automated moment-to-moment labels against human labels. Methods include DSP-based foreground-background separation, cross-correlation-based template matching, and deep learning models for bird-presence sound event detection.

Score: 50 / 100 (Established)

This tool helps wildlife biologists and ecologists automatically pinpoint the exact moments animal sounds occur within longer audio recordings. You provide audio files together with clip-level annotations that a sound is present (weak labels), and it outputs precise, moment-to-moment timestamps for each sound event (strong labels). It is designed for researchers and conservationists who analyze animal vocalizations.

Use this if you need to accurately identify the start and end times of specific animal vocalizations within large collections of environmental audio.

Not ideal if you primarily need to classify entire audio clips without detailed intra-clip timing, or if your analysis focuses on non-biological sounds.
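
For orientation, here is a minimal sketch of what driving the pipeline from Python might look like. The generate_automated_labels entry point and the isolation_parameters keys follow the project's README; treat the exact names and values as assumptions and verify them against the repo before use.

    # Minimal sketch, assuming PyHa's README-style entry point.
    # generate_automated_labels and the isolation_parameters keys are
    # taken from the project's documentation; verify against the repo.
    from PyHa.IsoAutio import generate_automated_labels

    isolation_parameters = {
        "model": "microfaune",       # bird-presence deep learning model
        "technique": "steinberg",    # DSP-based isolation technique
        "threshold_type": "median",
        "threshold_const": 2.0,
        "threshold_min": 0.0,
        "window_size": 2.0,
        "chunk_size": 5.0,
    }

    # Point at a folder of audio clips; the result is a DataFrame of
    # strong (start time / duration) labels, one row per detected event.
    automated_df = generate_automated_labels("./TEST/", isolation_parameters)
    print(automated_df.head())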

wildlife-monitoring bioacoustics sound-event-detection conservation-research species-identification
No Package · No Dependents
Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 18 / 25

Stars: 21
Forks: 13
Language: Jupyter Notebook
License: (not listed)
Last pushed: Feb 09, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/UCSD-E4E/PyHa"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
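
If you prefer scripting over curl, the same endpoint can be fetched with Python's standard library. This is a sketch only: the URL is the one shown above, but the shape of the JSON payload is an assumption, so dump it first rather than hard-coding field names.

    # Sketch: fetch the quality card for UCSD-E4E/PyHa from the public API.
    # No response field names are assumed; the payload is printed so you can
    # inspect its real structure before depending on specific keys.
    import json
    import urllib.request

    URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/UCSD-E4E/PyHa"

    with urllib.request.urlopen(URL, timeout=10) as resp:
        data = json.load(resp)

    print(json.dumps(data, indent=2))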