k4black/codebleu
Pip-compatible CodeBLEU metric implementation, available for Linux, macOS, and Windows
This tool evaluates the quality of machine-generated code by comparing it against reference ("ground truth") code. Given a predicted snippet and a reference snippet in a language such as Python, Java, or C++, it outputs a score that combines surface-level n-gram overlap with syntactic (AST) and semantic (data-flow) similarity, following the CodeBLEU metric. Researchers and developers working on code generation models use it to measure model performance objectively.
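A minimal usage sketch, based on the calc_codebleu function the package exposes; the weights, inputs, and result key are illustrative, so check the project README for the exact signature and return fields:

    from codebleu import calc_codebleu

    prediction = "def add(a, b):\n    return a + b"
    reference = "def sum(first, second):\n    return second + first"

    # Equal weights for the four CodeBLEU components:
    # n-gram match, weighted n-gram match, AST match, data-flow match.
    result = calc_codebleu(
        [reference], [prediction],
        lang="python",
        weights=(0.25, 0.25, 0.25, 0.25),
    )
    print(result["codebleu"])  # aggregate score in [0, 1]

Because the AST and data-flow components require parsing, scores are only meaningful when lang matches the language of the compared snippets.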
130 stars. Used by 1 other package. No commits in the last 6 months. Available on PyPI.
Use this if you are developing or evaluating code generation, code translation, or code summarization models and need a robust metric to compare model output to human-written code.
Not ideal if you are looking for a simple pass/fail test for code correctness, or if your primary goal is static code analysis for bugs and style rather than similarity assessment.
Stars: 130
Forks: 28
Language: Python
License: MIT
Category: ai-coding
Last pushed: Mar 31, 2025
Commits (30d): 0
Dependencies: 2
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ai-coding/k4black/codebleu"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
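For scripted access, the same endpoint can be fetched from Python with the standard library; the response schema is not documented here, so this sketch simply prints the parsed JSON:

    import json
    import urllib.request

    URL = "https://pt-edge.onrender.com/api/v1/quality/ai-coding/k4black/codebleu"

    # Anonymous access is limited to 100 requests/day;
    # a free key raises the limit to 1,000/day (see above).
    with urllib.request.urlopen(URL) as resp:
        data = json.load(resp)

    print(json.dumps(data, indent=2))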
Related tools
LiveCodeBench/LiveCodeBench
Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of...
EdinburghNLP/code-docstring-corpus
Preprocessed Python functions and docstrings for automated code documentation (code2doc) and...
hendrycks/apps
APPS: Automated Programming Progress Standard (NeurIPS 2021)
solis-team/Hydra
[FSE 2026] Do Not Treat Code as Natural Language: Implications for Repository-Level Code...
alxschwrz/codex_py2cpp
Converts python code into c++ by using OpenAI CODEX.