Lucky-Wang-Chenlong/CodeSync

[ICML25] CODESYNC: Synchronizing Large Language Models with Dynamic Code Evolution at Scale

24 / 100 (Experimental)

This project evaluates how well large language models adapt to changes in software libraries. It automatically builds training sets and benchmarks by tracking API updates, collecting real-world API usage, and synthesizing new API invocations. Software engineers, AI researchers, and developers working with LLMs for code generation would find this valuable.

No commits in the last 6 months.

Use this if you need to automatically create robust datasets and benchmarks to test how well your large language models handle evolving code APIs.

Not ideal if you are looking for a tool to directly assist with code migration or refactoring for human developers.

Tags: LLM evaluation, API versioning, code generation, dataset generation, software engineering research
No License · Stale 6m · No Package · No Dependents
Maintenance 2 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 7 / 25


Stars: 25
Forks: 2
Language: Python
License: None
Last pushed: Jul 31, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Lucky-Wang-Chenlong/CodeSync"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
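The same endpoint can also be queried from a script. Below is a minimal sketch using only the Python standard library; the endpoint URL comes from the curl example above, while the helper names and the assumption that the response body is JSON are illustrative, not part of the documented API.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    # Build the per-repository quality endpoint URL,
    # mirroring the curl example above.
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    # Fetch and decode the response; no API key is needed
    # for up to 100 requests/day. Assumes a JSON body.
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_quality("Lucky-Wang-Chenlong", "CodeSync")` would retrieve the metrics shown on this page as a Python dict.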