logpai/LogBench
A benchmark for logging statement generation.
This project helps software developers evaluate how well AI models generate logging statements in code. It takes existing code snippets and candidate logging statements as input, then measures both their quality and how well they generalize to modified versions of the code. Software engineers and researchers working on code quality, observability, or AI-assisted development can use it to compare and improve logging-generation tools.
No commits in the last 6 months.
Use this if you are a software engineer or researcher who needs to benchmark and compare AI models on generating high-quality, effective logging statements.
Not ideal if you are looking for a tool to automatically insert logs into your codebase without needing to evaluate the underlying generation model.
Stars: 26
Forks: 5
Language: Python
License: Apache-2.0
Category:
Last pushed: Nov 03, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ai-coding/logpai/LogBench"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
k4black/codebleu
Pip compatible CodeBLEU metric implementation available for linux/macos/win
LiveCodeBench/LiveCodeBench
Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of...
EdinburghNLP/code-docstring-corpus
Preprocessed Python functions and docstrings for automated code documentation (code2doc) and...
hendrycks/apps
APPS: Automated Programming Progress Standard (NeurIPS 2021)
solis-team/Hydra
[FSE 2026] Do Not Treat Code as Natural Language: Implications for Repository-Level Code...