tongye98/Awesome-Code-Benchmark

A comprehensive review of code-domain benchmarks for LLM research.

Score: 40 / 100 (Emerging)

This is a curated collection of research benchmarks designed to evaluate how well large language models (LLMs) perform on various coding tasks. It brings together studies that assess LLMs on their ability to generate, review, translate, debug, and secure code across different programming scenarios. Researchers and practitioners in AI and software engineering can use this to understand the current capabilities and limitations of LLMs in code-related applications.

208 stars. No commits in the last 6 months.

Use this if you are researching or developing large language models and need to compare their performance against established metrics for coding tasks like code generation, debugging, or security.

Not ideal if you are a software developer looking for tools to write, debug, or manage code directly, as this is a resource for evaluating AI models, not a development environment.

Tags: AI-evaluation, software-engineering-AI, LLM-benchmarking, code-analysis-AI, AI-for-code
Badges: Stale (6 months) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 12 / 25

How are scores calculated? Each of the four dimensions is scored out of 25, and here they sum to the overall score: 2 + 10 + 16 + 12 = 40 / 100.

Stars: 208
Forks: 16
Language: —
License: MIT
Last pushed: Sep 22, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ai-coding/tongye98/Awesome-Code-Benchmark"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
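
For programmatic use, here is a minimal Python sketch of the same request. It assumes only that the endpoint returns JSON; the response schema is not documented on this page, so no specific fields are relied on.

# Fetch the quality report for this repo and pretty-print it.
# Assumption: the endpoint returns a JSON body (schema undocumented here).
import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ai-coding/tongye98/Awesome-Code-Benchmark")

with urllib.request.urlopen(URL) as resp:
    report = json.load(resp)

print(json.dumps(report, indent=2))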