jefflai108/Semi-Supervsied-Spoken-Language-Understanding-PyTorch

Semi-supervised spoken language understanding (SLU) via self-supervised speech and language model pretraining

Score: 36 / 100 (Emerging)

This project helps build spoken language understanding (SLU) systems that can interpret user commands even with limited labeled data. It takes raw speech audio, transcribes it, and extracts the user's intent along with specific details, or "slots" (like a city name or product). This is useful for anyone creating voice assistants, interactive voice response (IVR) systems, or other speech-driven applications.
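To make the intent-and-slot output concrete, here is a minimal sketch of what such a system returns for one utterance. The intent and slot labels below are hypothetical examples for illustration, not this project's actual label set, and the lookup stands in for a trained ASR + SLU pipeline:

```python
# Illustrative only: a toy stand-in for a trained SLU model.
# The intent and slot names are made-up examples, not the
# project's real label inventory.

def parse_utterance(transcript: str) -> dict:
    """Map a transcript to a hypothetical intent + slots result."""
    # A real system would run speech recognition and then
    # intent/slot models here; this is a hard-coded sketch.
    if "weather" in transcript.lower() and "boston" in transcript.lower():
        return {
            "transcript": transcript,
            "intent": "get_weather",
            "slots": {"city": "Boston"},
        }
    return {"transcript": transcript, "intent": "unknown", "slots": {}}

result = parse_utterance("what is the weather in Boston")
print(result["intent"])  # get_weather
print(result["slots"])   # {'city': 'Boston'}
```

The point is the output shape: one intent label per utterance plus a dictionary of slot values, which is what distinguishes SLU from plain speech-to-text.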

No commits in the last 6 months.

Use this if you need to develop a voice assistant or IVR system that understands spoken commands, but you have difficulty collecting large amounts of labeled speech data.

Not ideal if you are looking for a general-purpose speech-to-text transcriber without needing to extract specific intents or 'slots' of information.

Tags: voice-assistant-development, IVR, speech-recognition, natural-language-understanding, dialog-systems
Status: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 15 / 25


Stars: 12
Forks: 4
Language: Python
License: MIT
Last pushed: Mar 23, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/jefflai108/Semi-Supervsied-Spoken-Language-Understanding-PyTorch"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
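For a scripted alternative to the curl command above, here is a minimal sketch using only Python's standard library. The URL is the one shown above; the structure of the JSON response is not documented here, so inspect the raw payload before relying on specific field names:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/voice-ai"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"

url = quality_url(
    "jefflai108",
    "Semi-Supervsied-Spoken-Language-Understanding-PyTorch",
)
print(url)

# Uncomment to fetch live data (no key needed, 100 requests/day):
# with urllib.request.urlopen(url) as resp:
#     data = json.loads(resp.read())
# print(data)  # field names depend on the API's response schema
```

Keeping the request commented out avoids burning the daily quota while testing URL construction; swap in a free API key if you need the higher 1,000/day limit.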