spaghettiSystems/serval

SER Evals: In-Domain and Out-of-Domain Benchmarking for Speech Emotion Recognition

Quality score: 28 / 100 (Experimental)

This tool helps researchers and engineers in speech technology systematically evaluate and benchmark Speech Emotion Recognition (SER) models. You provide raw audio datasets and pre-trained speech models; the system then automates dataset preparation, feature extraction, model training, and performance evaluation, making it straightforward to compare SER systems under consistent conditions.

No commits in the last 6 months.

Use this if you need to rigorously test and compare different Speech Emotion Recognition models across various datasets and pre-trained architectures to understand their in-domain and out-of-domain performance.

Not ideal if you are looking for a pre-trained SER model to use directly in an application or if you only need to perform basic inference with an existing model.

Tags: speech-emotion-recognition, audio-processing, AI-model-evaluation, natural-language-processing, machine-learning-research
Flags: Stale (6 months), No Package, No Dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 7 / 25

How are scores calculated?

Stars: 11
Forks: 1
Language: Python
License: GPL-3.0
Last pushed: Aug 14, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/spaghettiSystems/serval"

Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000 requests/day.
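The same endpoint can be called programmatically. Below is a minimal Python sketch using only the standard library; the `quality_url` and `fetch_quality` helper names are illustrative, and the shape of the JSON response is an assumption, not documented behavior of the API.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the endpoint path shown in the curl example:
    # /api/v1/quality/<category>/<owner>/<repo>
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # Anonymous access is limited to 100 requests/day; a free API key
    # raises that to 1,000/day. The response is assumed to be JSON.
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# Example (performs a live HTTP request):
# data = fetch_quality("ml-frameworks", "spaghettiSystems", "serval")
```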