SeekingDream/DyCodeEval

Official repository of the ICML 2025 paper "Dynamic Benchmarking of Reasoning Capabilities in Code Large Language Models Under Data Contamination".

Quality score: 36 / 100 (Emerging)

This project offers a new way to test how well code-generating AI models (Code LLMs) can actually solve programming problems, even when they have seen similar code during training. It takes problems from existing benchmarks such as HumanEval or MBPP and dynamically rewrites each one into new, diverse variants, so memorized answers no longer help. For AI researchers and practitioners evaluating these models, the result is a more reliable assessment of a Code LLM's true reasoning ability, free from the influence of data contamination.
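The rewriting idea is simple to sketch. The Python snippet below is a minimal illustration under assumptions, not the repository's actual pipeline: the prompt wording, the scenario list, and the llm_generate callable are hypothetical stand-ins, and the real prompts and tooling live in the repo itself.

import random

# Hypothetical rewriting prompt; DyCodeEval's real prompts differ.
REWRITE_PROMPT = (
    "Rewrite the following programming problem so it describes a new "
    "real-world scenario ({scenario}) while keeping the task, function "
    "signature, and input/output behavior unchanged.\n\nProblem:\n{problem}"
)

SCENARIOS = ["inventory management", "flight scheduling", "music playlists"]

def rewrite_problem(problem: str, llm_generate) -> str:
    """Produce one dynamic variant of a benchmark problem.
    llm_generate is any prompt -> completion callable (an assumption here)."""
    scenario = random.choice(SCENARIOS)
    return llm_generate(REWRITE_PROMPT.format(scenario=scenario, problem=problem))

def build_dynamic_benchmark(problems, llm_generate, k=3):
    """Make k fresh variants per problem; because the underlying task is
    preserved, the original canonical tests can still grade the output."""
    return {p: [rewrite_problem(p, llm_generate) for _ in range(k)] for p in problems}

Because each evaluation run draws fresh variants, a model that merely memorized the original benchmark text gets no advantage.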

Use this if you need to rigorously evaluate the reasoning capabilities of Code LLMs and want to avoid misleading results caused by models having already seen the test data.

Not ideal if you are looking for a tool to develop or fine-tune Code LLMs, as its primary purpose is evaluation rather than model creation.

Tags: AI model evaluation, Code LLM benchmarking, Generative AI testing, Data contamination, Reasoning assessment
No license · No package · No dependents
Maintenance: 6 / 25
Adoption: 10 / 25
Maturity: 7 / 25
Community: 13 / 25

Stars: 255
Forks: 22
Language: Python
License: None
Last pushed: Dec 23, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/SeekingDream/DyCodeEval"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
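To consume the same endpoint from code, a minimal Python sketch follows; the X-API-Key header name and the shape of the JSON response are assumptions, since neither is documented in this card.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/SeekingDream/DyCodeEval"

def fetch_quality(api_key=None):
    """Fetch the quality record. Anonymous calls get 100 requests/day;
    a free key raises that to 1,000/day. Header name is an assumption."""
    headers = {"X-API-Key": api_key} if api_key else {}
    resp = requests.get(URL, headers=headers, timeout=10)
    resp.raise_for_status()  # surface rate-limit or server errors
    return resp.json()

print(fetch_quality())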