shenxiangzhuang/bleuscore

BLEU Score in Rust

Quality score: 49 / 100 (Emerging)

This tool helps machine translation researchers and practitioners quickly evaluate translation quality. It takes a list of machine-generated translations and one or more human-written reference translations per candidate, then outputs a BLEU score indicating how closely the machine output matches the references. It is well suited to large-scale natural language processing projects, especially machine translation evaluation.

Available on PyPI.

Use this if you need a fast way to calculate BLEU scores over large volumes of machine translation output, particularly from Python, where the Rust core aims to outpace pure-Python implementations.

Not ideal if you are evaluating only a small number of translations or if you require a different metric beyond BLEU.
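For readers unfamiliar with the metric itself, here is a minimal pure-Python sketch of sentence-level BLEU: clipped n-gram precisions with uniform weights plus a brevity penalty. This illustrates what the score measures, not this library's actual API or implementation, which may tokenize, smooth, and aggregate differently.

```python
from collections import Counter
import math

def bleu(candidate, references, max_n=4):
    """Sentence-level BLEU: whitespace tokens, uniform weights, no smoothing."""
    cand = candidate.split()
    refs = [r.split() for r in references]
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        if not cand_ngrams:
            return 0.0  # candidate too short to form any n-gram
        # Clip each candidate n-gram count by its max count in any single reference
        max_ref = Counter()
        for ref in refs:
            ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
            for g, c in ref_ngrams.items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_ngrams.items())
        if clipped == 0:
            return 0.0  # zero precision at some order -> geometric mean is zero
        log_prec_sum += math.log(clipped / sum(cand_ngrams.values())) / max_n
    # Brevity penalty against the reference whose length is closest to the candidate
    ref_len = min((len(r) for r in refs), key=lambda rl: (abs(rl - len(cand)), rl))
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / len(cand))
    return bp * math.exp(log_prec_sum)
```

A perfect match scores 1.0, a candidate sharing no words with any reference scores 0.0, and partial overlaps fall in between.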

machine-translation natural-language-processing language-AI translation-quality-assessment
No Dependents
Maintenance 10 / 25
Adoption 8 / 25
Maturity 25 / 25
Community 6 / 25


Stars: 12
Forks: 1
Language: Rust
License: MIT
Last pushed: Mar 01, 2026
Monthly downloads: 18
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/shenxiangzhuang/bleuscore"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.