zai-org/GLM-ASR

GLM-ASR-Nano: A robust, open-source speech recognition model with 1.5B parameters

Quality score: 51 / 100 (Established)

This project helps convert spoken words into written text, even in challenging conditions. It takes audio recordings, including those with quiet speech or various Chinese dialects, and outputs accurate transcriptions. Anyone who needs to transcribe spoken audio, like researchers analyzing interviews or businesses processing customer calls, would find this useful.


Use this if you need highly accurate transcriptions across 17 languages, especially for quiet speech or Chinese dialects such as Cantonese.

Not ideal if you only need quick transcription of clear, standard English speech and specialized dialect support is not a priority.

audio-transcription dialect-recognition speech-to-text meeting-minutes interview-analysis
No package · No dependents
Maintenance 10 / 25
Adoption 10 / 25
Maturity 13 / 25
Community 18 / 25


Stars: 759
Forks: 70
Language: Python
License: Apache-2.0
Last pushed: Mar 06, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/zai-org/GLM-ASR"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
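The same endpoint can be queried from a script. A minimal sketch using only the Python standard library; the JSON response shape is not documented above, so the fetch helper simply returns the parsed body as-is:

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality report (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# URL for this project, matching the curl command above:
url = quality_url("voice-ai", "zai-org", "GLM-ASR")
```

For higher volume, pass the free API key however the service expects it (e.g. a header or query parameter); the exact mechanism is not specified on this page.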