YuanGongND/whisper-at
Code and Pretrained Models for Interspeech 2023 Paper "Whisper-AT: Noise-Robust Automatic Speech Recognizers are Also Strong Audio Event Taggers"
This tool helps researchers and application developers analyze audio by providing both a transcript of spoken words and a list of identified sounds or events within the audio. You input an audio file (like an MP3), and it outputs the spoken text along with labels for other sounds like music, speech, or environmental noises, detected at specified time intervals. It's ideal for anyone working with audio recordings who needs to understand not just what was said, but also what else was happening sonically.
412 stars. No commits in the last 6 months. Available on PyPI.
Use this if you need to automatically transcribe spoken words from an audio file and simultaneously identify other ambient or event-based sounds present in that audio.
Not ideal if your primary need is highly granular, sub-second audio event detection or if you only require speech transcription without any interest in other audio events.
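As a concrete illustration of the dual output described above (a transcript plus sound labels per time window), the sketch below post-processes a tagging result of the general shape the package produces: a list of time windows, each with (label, score) pairs. The sample data and the `summarize_tags` helper are hypothetical, for illustration only; the commented calls reflect the package's documented interface but are not executed here.

```python
# Hypothetical sketch: working with per-window audio tags alongside a transcript.
# Per the whisper-at README, the package is used roughly as:
#   import whisper_at as whisper
#   model = whisper.load_model("large-v1")
#   result = model.transcribe("audio.mp3", at_time_res=10)  # tag every 10 s
#   tags = whisper.parse_at_label(result, top_k=5, p_threshold=-1)
# Below, `segments` stands in for that parsed output (structure assumed).

segments = [
    {"time": {"start": 0, "end": 10},
     "audio tags": [("Speech", 0.92), ("Music", 0.31)]},
    {"time": {"start": 10, "end": 20},
     "audio tags": [("Music", 0.88), ("Applause", 0.40)]},
]

def summarize_tags(segments, threshold=0.5):
    """Keep only tags whose score clears `threshold`, keyed by time window."""
    summary = {}
    for seg in segments:
        window = (seg["time"]["start"], seg["time"]["end"])
        summary[window] = [label for label, score in seg["audio tags"]
                           if score >= threshold]
    return summary

print(summarize_tags(segments))
# → {(0, 10): ['Speech'], (10, 20): ['Music']}
```

Lowering the threshold keeps weaker detections, e.g. `summarize_tags(segments, threshold=0.3)` also retains `Music` in the first window and `Applause` in the second.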
Stars
412
Forks
36
Language
Python
License
BSD-2-Clause
Category
Last pushed
Feb 21, 2024
Commits (30d)
0
Dependencies
7
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/YuanGongND/whisper-at"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
adi-gov-tw/Taiwan-Tongues-ASR-CE
Taiwan Tongues ASR CE is an open-source automatic speech recognition (ASR) model project designed for Taiwan's multilingual environment. The model supports...
huggingface/distil-whisper
Distilled variant of Whisper for speech recognition. 6x faster, 50% smaller, within 1% word error rate.
phineas-pta/fine-tune-whisper-vi
jupyter notebooks to fine tune whisper models on Vietnamese using Colab and/or Kaggle and/or AWS EC2
KevKibe/African-Whisper
🚀 Framework for seamless fine-tuning of Whisper model on a multi-lingual dataset and deployment to prod.
huuquyet/PhoWhisper-next
Demo using PhoWhisper models of VinAI built with Transformers.js + Next.js