UCSD-E4E/PyHa
A repo designed to convert audio-based "weak" labels to "strong" intra-clip labels. Provides a pipeline to compare automated moment-to-moment labels against human labels. Methods range from DSP-based foreground-background separation and cross-correlation-based template matching to deep learning models for bird-presence sound event detection.
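One of the listed methods, cross-correlation-based template matching, can be illustrated with a minimal sketch. This is not PyHa's actual implementation; the function name, threshold value, and toy signal are invented for illustration:

```python
import numpy as np
from scipy.signal import correlate

def match_template(audio, template, sr, threshold=0.5):
    """Slide a template clip across a longer recording and return the
    times (in seconds) where normalized cross-correlation exceeds a
    threshold, along with the full score curve."""
    # Raw cross-correlation of the recording with the template
    corr = correlate(audio, template, mode="valid")
    # Normalize by the energy of the template and of each window
    window_energy = np.convolve(audio ** 2, np.ones(len(template)), mode="valid")
    norm = np.linalg.norm(template) * np.sqrt(window_energy)
    score = corr / np.maximum(norm, 1e-12)
    # Indices where the normalized score clears the threshold
    hits = np.flatnonzero(score > threshold)
    return hits / sr, score

# Tiny demo: a 50 Hz sine-burst "call" embedded in noise at 0.8 s
sr = 1000
t = np.arange(200) / sr
template = np.sin(2 * np.pi * 50 * t)
audio = 0.05 * np.random.default_rng(0).standard_normal(2000)
audio[800:1000] += template
times, score = match_template(audio, template, sr, threshold=0.8)
print(times)  # detections clustered near 0.8 s
```

Real detectors typically add peak-picking and a minimum spacing between detections so one call does not produce a cluster of overlapping hits.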
This tool helps wildlife biologists and ecologists automatically pinpoint the exact moments animal sounds occur within longer audio recordings. You provide audio files and general information about sound events (weak labels), and it outputs precise, moment-to-moment timestamps for each sound. This is designed for researchers and conservationists who analyze animal vocalizations.
Use this if you need to accurately identify the start and end times of specific animal vocalizations within large collections of environmental audio.
Not ideal if you primarily need to classify entire audio clips without detailed intra-clip timing, or if your analysis focuses on non-biological sounds.
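The weak-to-strong conversion described above can be pictured as thresholding per-frame presence scores into time intervals. A hedged sketch, not PyHa's API; the function name, frame rate, and toy scores are invented:

```python
import numpy as np

def scores_to_strong_labels(scores, frame_rate, threshold=0.5):
    """Turn per-frame presence scores into (start_s, end_s) event
    intervals by thresholding and grouping consecutive active frames."""
    active = scores >= threshold
    # Rising (+1) and falling (-1) edges of the active mask
    edges = np.diff(active.astype(int), prepend=0, append=0)
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    return [(s / frame_rate, e / frame_rate) for s, e in zip(starts, ends)]

scores = np.array([0.1, 0.2, 0.9, 0.8, 0.7, 0.1, 0.6, 0.9, 0.2])
print(scores_to_strong_labels(scores, frame_rate=10))
# → [(0.2, 0.5), (0.6, 0.8)]
```

In practice a pipeline would also smooth the scores and merge intervals separated by very short gaps before comparing against human annotations.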
Stars
21
Forks
13
Language
Jupyter Notebook
License
—
Category
Last pushed
Feb 09, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/UCSD-E4E/PyHa"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
birdnet-team/BirdNET-Analyzer
BirdNET analyzer for scientific audio data processing.
tphakala/birdnet-go
Realtime BirdNET soundscape analyzer
birdnet-team/birdnet
A Python library for identifying bird species by their sounds.
DrCoffey/DeepSqueak
DeepSqueak v3: Using Machine Vision to Accelerate Bioacoustics Research
ear-team/bambird
Unsupervised classification to improve the quality of a bird song recording dataset....