rashad101/RoMe

PyTorch code for the ACL 2022 paper "RoMe: A Robust Metric for Evaluating Natural Language Generation": https://aclanthology.org/2022.acl-long.387/

Quality score: 36 / 100 (Emerging)

RoMe helps natural language generation (NLG) researchers and practitioners reliably evaluate the quality of text produced by AI systems. It takes AI-generated text and a human-written reference text as input and produces a score reflecting how well the generated text holds up in meaning, grammar, and fluency. This tool is for anyone developing or assessing systems that generate human-like language.
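
The repository's Python entry points are not shown on this page, so the following is a purely illustrative sketch of the evaluation flow described above: an AI-generated candidate and a human-written reference go in, a single quality score comes out. The function name and the token-overlap F1 body are hypothetical stand-ins, not RoMe's actual method, which combines semantic, grammatical, and fluency signals.

    # Illustrative only: a toy reference-based scorer with the same call
    # shape as an NLG metric like RoMe (candidate + reference -> score).
    # The token-overlap F1 below is a placeholder, NOT RoMe's algorithm.
    def score(candidate: str, reference: str) -> float:
        cand_tokens = set(candidate.lower().split())
        ref_tokens = set(reference.lower().split())
        overlap = len(cand_tokens & ref_tokens)
        if overlap == 0:
            return 0.0
        precision = overlap / len(cand_tokens)
        recall = overlap / len(ref_tokens)
        return 2 * precision * recall / (precision + recall)

    # Higher means closer to the human reference.
    print(score("the cat sat on the mat", "a cat was sitting on the mat"))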

No commits in the last 6 months.

Use this if you need a robust, automatic way to score the quality of text generated by your AI models, especially when traditional n-gram overlap metrics fall short.

Not ideal if you need an evaluation that does not compare generated text against a specific reference, or if you are not working on natural language generation tasks.

natural-language-generation nlp-evaluation text-analytics ai-performance computational-linguistics
Stale (6 months) · No package · No dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 15 / 25
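
The four category scores add up to the overall rating shown above: 0 + 5 + 16 + 15 = 36 / 100.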

Stars: 10
Forks: 5
Language: Python
License: MIT
Last pushed: Aug 13, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/rashad101/RoMe"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
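
The same endpoint can be called from Python using only the standard library. This sketch assumes the endpoint returns a JSON document; its exact fields are not documented on this page, so the result is simply pretty-printed:

    # Fetch the quality data for rashad101/RoMe from the public API.
    # Assumption: the response is JSON; its fields are not documented
    # on this page, so we just pretty-print whatever comes back.
    import json
    import urllib.request

    URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/rashad101/RoMe"

    with urllib.request.urlopen(URL) as resp:
        data = json.load(resp)

    print(json.dumps(data, indent=2))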