TIGER-AI-Lab/AceCoder

The official repo for "AceCoder: Acing Coder RL via Automated Test-Case Synthesis" [ACL25]

Quality score: 31 / 100 (Emerging)

This project helps AI researchers and practitioners improve the performance of large language models (LLMs) on code generation tasks. Given an existing code generation LLM and a dataset of coding problems, it automatically synthesizes high-quality test cases. These test cases are used to train a reward model, which in turn fine-tunes the LLM via reinforcement learning to produce more accurate and robust code.

No commits in the last 6 months.

Use this if you are developing or fine-tuning LLMs for coding and need a fully automated way to create extensive and reliable test cases for reinforcement learning.

Not ideal if you are a software developer looking for a tool to generate unit tests for your existing code, or if you primarily work with natural language generation LLMs.

Tags: AI model training · code generation · LLM fine-tuning · test data synthesis · machine learning research
Badges: Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 6 / 25


Stars: 99
Forks: 3
Language: Python
License: MIT
Last pushed: Apr 09, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/TIGER-AI-Lab/AceCoder"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
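For programmatic use, the same endpoint can be queried from Python. This is a minimal sketch using only the standard library; the response schema is not documented here, so the helper simply decodes and returns whatever JSON the API sends back:

```python
import json
import urllib.request

# Endpoint from the curl example above; no API key is required
# for up to 100 requests/day.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/TIGER-AI-Lab/AceCoder"

def fetch_quality(url: str = API_URL) -> dict:
    """GET the quality endpoint and decode the JSON response body."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example usage (performs a live network request):
#   print(json.dumps(fetch_quality(), indent=2))
```

Since the field names in the response are not specified on this page, inspect the pretty-printed output once before relying on any particular key.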