rashad101/RoMe
PyTorch code for the ACL 2022 paper "RoMe: A Robust Metric for Evaluating Natural Language Generation": https://aclanthology.org/2022.acl-long.387/
RoMe helps natural language generation (NLG) researchers and practitioners reliably evaluate the quality of machine-generated text. Given a generated sentence and a human-written reference, it produces a score reflecting how well the generated text holds up in terms of meaning, grammar, and fluency. This tool is for anyone developing or assessing systems that generate human-like language.
No commits in the last 6 months.
Use this if you need a robust, automatic way to score the quality of text generated by your AI models, especially when traditional metrics fall short.
Not ideal if you require an evaluation that does not rely on comparing generated text to a specific reference, or if you are not working with natural language generation tasks.
Stars: 10
Forks: 5
Language: Python
License: MIT
Category:
Last pushed: Aug 13, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/rashad101/RoMe"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
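For programmatic use, the curl command above can be wrapped in a small Python helper. This is a minimal sketch assuming only what the example URL shows: a path of the form `/api/v1/quality/<category>/<owner>/<repo>` (the `nlp` segment is assumed to be the repository's category); the JSON response schema is not documented here, so the payload is returned as-is.

```python
import json
import urllib.request

# Base path taken from the curl example above; the rest is an assumption.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-data endpoint for one repository."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (schema undocumented)."""
    with urllib.request.urlopen(build_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Matches the curl example for this repository.
    print(build_url("nlp", "rashad101", "RoMe"))
```

Without a key this stays within the 100 requests/day limit, so cache responses if you poll many repositories.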
Higher-rated alternatives
google/langfun
OO for LLMs
tanaos/artifex
Small Language Model Inference, Fine-Tuning and Observability. No GPU, no labeled data needed.
preligens-lab/textnoisr
Add random noise to a text dataset while precisely controlling the quality of the result
vulnerability-lookup/VulnTrain
A tool to generate datasets and models based on vulnerabilities descriptions from @Vulnerability-Lookup.
masakhane-io/masakhane-mt
Machine Translation for Africa