Tiiiger/bert_score

BERT score for text generation

Quality score: 61 / 100 (Established)

This tool automatically assesses the quality of generated text, such as summaries, translations, or chatbot responses. It takes your generated text and a human-written reference, then outputs scores indicating how semantically similar the two are. It is useful for anyone working with AI language models, such as researchers or product developers, who needs to evaluate text quality quickly without relying solely on manual human review.
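For a quick start, here is a minimal sketch using the score helper from the bert-score package on PyPI. The example sentences are illustrative, and the exact defaults (such as the model selected by lang="en") may vary across releases.

from bert_score import score

# One candidate (model output) paired with one human-written reference.
candidates = ["The cat sat quietly on the mat."]
references = ["A cat was sitting on the mat."]

# Returns three tensors: precision, recall, and F1, one entry per pair.
P, R, F1 = score(candidates, references, lang="en", verbose=False)

print(f"Precision: {P.mean().item():.4f}")
print(f"Recall:    {R.mean().item():.4f}")
print(f"F1:        {F1.mean().item():.4f}")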

1,880 stars. Used by 18 other packages. No commits in the last 6 months. Available on PyPI.

Use this if you need an automated, robust way to measure how well your AI-generated text aligns with human-written references; it reports precision, recall, and F1 scores.

Not ideal if your primary concern is assessing grammatical correctness, fluency, or other human-like qualities that don't directly relate to semantic similarity with a reference text.

Tags: natural-language-generation, text-summarization, machine-translation, chatbot-evaluation, content-creation-evaluation
Score breakdown (the four components sum to the overall 61 / 100):
Maintenance: 0 / 25 (stale; no commits in the last 6 months)
Adoption: 15 / 25
Maturity: 25 / 25
Community: 21 / 25


Stars: 1,880
Forks: 237
Language: Jupyter Notebook
License: MIT
Last pushed: Jul 30, 2024
Commits (30d): 0
Dependencies: 8
Reverse dependents: 18

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Tiiiger/bert_score"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
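The same data can be fetched from Python with only the standard library. This is a rough sketch: the response schema isn't documented on this page, so the example simply pretty-prints whatever JSON the endpoint returns.

import json
import urllib.request

# Endpoint taken from the curl example above (no key needed at the
# 100 requests/day tier).
url = "https://pt-edge.onrender.com/api/v1/quality/nlp/Tiiiger/bert_score"

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# Schema unknown here; print the raw payload for inspection.
print(json.dumps(data, indent=2))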