YuanGongND/whisper-at

Code and Pretrained Models for Interspeech 2023 Paper "Whisper-AT: Noise-Robust Automatic Speech Recognizers are Also Strong Audio Event Taggers"

Score: 50 / 100 (Established)

This tool helps researchers and application developers analyze audio by providing both a transcript of spoken words and a list of identified sounds or events within the audio. You input an audio file (like an MP3), and it outputs the spoken text along with labels for other sounds like music, speech, or environmental noises, detected at specified time intervals. It's ideal for anyone working with audio recordings who needs to understand not just what was said, but also what else was happening sonically.
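As a sketch of typical usage (adapted from the upstream README; the exact model names and `parse_at_label` arguments are assumptions to verify against the current release), transcription and tagging run in a single pass:

```python
# Sketch of whisper-at usage, adapted from the upstream README.
# Model name and parse_at_label arguments are assumptions; check the
# repository's documentation for the current release.
import whisper_at as whisper

model = whisper.load_model("large-v1")
# at_time_res: audio-tagging window in seconds (event labels are
# reported once per window)
result = model.transcribe("audio.mp3", at_time_res=10)

print(result["text"])  # the speech transcript

# Convert raw tagging output into human-readable labels, top 5 per window
audio_tags = whisper.parse_at_label(result, language="follow_asr",
                                    top_k=5, p_threshold=-1)
print(audio_tags)
```

Note that the tagging resolution is bounded by `at_time_res`, which fits the "detected at specified time intervals" behavior described above rather than sub-second event detection.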

412 stars. No commits in the last 6 months. Available on PyPI.

Use this if you need to automatically transcribe spoken words from an audio file and simultaneously identify other ambient or event-based sounds present in that audio.

Not ideal if your primary need is highly granular, sub-second audio event detection or if you only require speech transcription without any interest in other audio events.

Tags: audio-analysis, speech-transcription, sound-event-detection, media-analysis, content-moderation
Score breakdown:
- Maintenance: 0 / 25 (stale for 6 months)
- Adoption: 10 / 25
- Maturity: 25 / 25
- Community: 15 / 25


Stars: 412
Forks: 36
Language: Python
License: BSD-2-Clause
Last pushed: Feb 21, 2024
Commits (30d): 0
Dependencies: 7

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/YuanGongND/whisper-at"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
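The same endpoint can be queried from Python. This minimal sketch only builds the URL shown above; since the response schema is not documented here, the live fetch is left commented out rather than assuming specific JSON fields:

```python
# Minimal sketch: query the quality API from Python instead of curl.
# No response schema is assumed; the network call is commented out.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/voice-ai"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("YuanGongND", "whisper-at")
# with urllib.request.urlopen(url) as resp:  # uncomment to fetch live
#     data = json.load(resp)
print(url)
```

Without an API key, stay under the 100 requests/day limit noted above when polling.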