Tomiinek/MultiWOZ_Evaluation

Unified MultiWOZ evaluation scripts for the context-to-response task.

Score: 42 / 100 (Emerging)

This tool helps researchers and developers evaluate how well their conversational AI models generate responses for the multi-turn dialogues of the MultiWOZ benchmark. You provide your model's generated responses and predicted dialogue states, and it calculates key metrics such as BLEU score, Inform & Success rates, and lexical richness. It is designed for anyone working on improving dialogue systems, particularly those focused on response generation.
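
A rough sketch of what an evaluation run might look like, assuming the package exposes an Evaluator class under mwzeval.metrics and accepts predictions keyed by dialogue ID; the module path, constructor flags, and input schema shown here are illustrative, so check the repository README for the exact API:

from mwzeval.metrics import Evaluator

# Predictions keyed by dialogue ID; each turn carries the generated
# response and, optionally, the predicted dialogue state.
my_predictions = {
    "pmul4462": [
        {
            "response": "I found [value_count] trains for you .",
            "state": {"train": {"leaveat": "10:15", "destination": "cambridge"}},
        },
        # ... one dict per turn ...
    ],
    # ... one entry per dialogue ...
}

e = Evaluator(bleu=True, success=True, richness=True)
results = e.evaluate(my_predictions)
print(results)  # BLEU, Inform & Success rates, lexical richness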

No commits in the last 6 months.

Use this if you need a standardized and easy-to-use way to evaluate the quality of responses generated by your conversational AI model on the MultiWOZ benchmark.

Not ideal if you are evaluating a dialogue system on a dataset other than MultiWOZ, or if your primary focus is metrics for tasks like intent recognition or entity extraction.

conversational-ai dialogue-system-evaluation natural-language-generation chatbots nlg-metrics
Status: Stale (6 months) · No package published · No dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 18 / 25


Stars: 59
Forks: 13
Language: Python
License: MIT
Last pushed: Oct 11, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Tomiinek/MultiWOZ_Evaluation"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
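
If you would rather query the API from Python than curl, here is a minimal sketch using the requests library; the shape of the returned JSON is not documented here, so inspect the payload before depending on specific keys:

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/nlp/Tomiinek/MultiWOZ_Evaluation"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # quality scores and repository stats as JSON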