TianboJi/Dialogue-Eval

Code and data for paper "Achieving Reliable Human Assessment of Open-Domain Dialogue Systems"

Overall score: 33 / 100 (Emerging)

This tool helps researchers and developers reliably evaluate open-domain dialogue systems using human feedback. It takes collected dialogue data and the associated human ratings (e.g., for interestingness, fluency, robotic-ness) in JSON format, and outputs statistical reports such as per-system Z-scores, rater agreement metrics, and significance-test visualizations for comparing conversational AI models.
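As a rough illustration of the Z-score step (a sketch under assumed input, not the repository's actual code; the record keys `rater`, `system`, and `score` are invented for the example), per-rater standardization followed by per-system averaging might look like this:

```python
from collections import defaultdict
from statistics import mean, pstdev

def system_z_scores(ratings):
    """Standardize scores per rater, then average per system (illustrative only)."""
    by_rater = defaultdict(list)
    for r in ratings:
        by_rater[r["rater"]].append(r["score"])

    # Per-rater mean and standard deviation used for standardization.
    stats = {
        rater: (mean(scores), pstdev(scores) or 1.0)  # guard against zero spread
        for rater, scores in by_rater.items()
    }

    by_system = defaultdict(list)
    for r in ratings:
        mu, sigma = stats[r["rater"]]
        by_system[r["system"]].append((r["score"] - mu) / sigma)

    # Average standardized score per system.
    return {system: mean(zs) for system, zs in by_system.items()}

# Hypothetical example input: one rating per (rater, system) pair.
ratings = [
    {"rater": "r1", "system": "botA", "score": 4},
    {"rater": "r1", "system": "botB", "score": 2},
    {"rater": "r2", "system": "botA", "score": 5},
    {"rater": "r2", "system": "botB", "score": 3},
]
print(system_z_scores(ratings))  # {'botA': 1.0, 'botB': -1.0}
```

Standardizing per rater before averaging removes individual raters' scale biases, which is the usual motivation for reporting Z-scores rather than raw means.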

No commits in the last 6 months.

Use this if you need to rigorously analyze human evaluation data to compare multiple open-domain dialogue systems and ensure the reliability of your assessment.

Not ideal if you are looking for a tool to collect human feedback or if your evaluation criteria are not numerical ratings.

conversational-ai dialogue-system-evaluation human-in-the-loop-evaluation nlp-research chatbot-development
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 16 / 25
Community 13 / 25

How are scores calculated?
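Judging from the figures on this page, the overall score appears to be the sum of the four category scores, each out of 25: 0 + 4 + 16 + 13 = 33 out of 100.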

Stars: 8
Forks: 2
Language: Python
License: MIT
Last pushed: Nov 18, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/TianboJi/Dialogue-Eval"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
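For programmatic access from Python, something like the sketch below should work; it assumes the endpoint returns JSON and that the `requests` package is installed:

```python
# Minimal sketch: fetch the quality data for this repository via the public API.
# Assumes the endpoint returns a JSON body; adjust the handling if it does not.
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/TianboJi/Dialogue-Eval"

response = requests.get(URL, timeout=30)
response.raise_for_status()  # surface HTTP errors (e.g., rate limiting) early
print(response.json())
```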