logpai/LogBench

A benchmark for logging statement generation.

Quality score: 38 / 100 (Emerging)

This project helps software developers evaluate how well different AI models can automatically generate logging statements within code. It takes your existing code snippets and candidate logging statements as input and measures their quality and how well they generalize to modified code. Software engineers and researchers working on code quality, observability, or AI-assisted development would use this to compare and improve logging generation tools.

No commits in the last 6 months.

Use this if you are a software engineer or researcher needing to benchmark and compare different AI models for their ability to generate high-quality and effective logging statements in software.

Not ideal if you are looking for a tool to automatically insert logs into your codebase without needing to evaluate the underlying generation model.

Tags: software-development, observability, code-quality, AI-assisted-development, software-engineering-research

Badges: Stale (6m), No Package, No Dependents

Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 15 / 25


Stars: 26
Forks: 5
Language: Python
License: Apache-2.0
Last pushed: Nov 03, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ai-coding/logpai/LogBench"

Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
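The curl call above can be wrapped in a small Python helper for use in scripts. This is a minimal sketch: the URL pattern is taken from the example above, but the shape of the JSON response is not documented on this page, so the fetch function simply returns the decoded payload as a dict without assuming any fields.

```python
import json
import urllib.request

# Base path taken from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ai-coding"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch the quality report as a dict.

    Raises urllib.error.URLError on network failure; the response schema
    is an assumption (the API returns JSON, fields are not specified here).
    """
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the same URL as the curl example above.
    print(quality_url("logpai", "LogBench"))
```

Usage: `fetch_quality("logpai", "LogBench")` would issue the same request as the curl command, subject to the unauthenticated 100 requests/day limit.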