TIGER-AI-Lab/AceCoder
The official repo for "AceCoder: Acing Coder RL via Automated Test-Case Synthesis" [ACL25]
This project helps AI researchers and practitioners improve large language models (LLMs) for code generation. Given an existing code-generation LLM and a dataset of coding problems, it automatically synthesizes high-quality test cases. These tests are used to train a reward model, which in turn guides reinforcement-learning fine-tuning of the LLM toward more accurate and robust code.
No commits in the last 6 months.
Use this if you are developing or fine-tuning LLMs for coding and need a fully automated way to create extensive and reliable test cases for reinforcement learning.
Not ideal if you are a software developer looking for a tool to generate unit tests for your existing code, or if you primarily work with natural language generation LLMs.
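The core idea described above is to turn synthesized test cases into a training signal: a candidate program's reward is how many of the auto-generated tests it passes. A minimal sketch of such a pass-rate reward, assuming tests are plain assertion strings (the function name and shape here are illustrative, not AceCoder's actual API):

```python
def pass_rate_reward(program: str, tests: list[str]) -> float:
    """Score a candidate solution by the fraction of test assertions it passes.

    `program` is source code defining the function under test; each entry in
    `tests` is an executable assertion string. Illustrative sketch only.
    """
    namespace: dict = {}
    try:
        # NOTE: in a real RL pipeline, untrusted generated code must be
        # executed in a sandbox, never with a bare exec().
        exec(program, namespace)
    except Exception:
        return 0.0  # code that fails to even define the function earns nothing
    passed = 0
    for test in tests:
        try:
            exec(test, namespace)
            passed += 1
        except Exception:
            pass  # a failing assertion simply contributes no reward
    return passed / len(tests) if tests else 0.0

# Example: a correct solution passes both synthesized tests (reward 1.0),
# while a buggy one that subtracts instead of adding passes neither (0.0).
solution = "def add(a, b):\n    return a + b"
tests = ["assert add(1, 2) == 3", "assert add(-1, 1) == 0"]
reward = pass_rate_reward(solution, tests)
```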
Stars: 99
Forks: 3
Language: Python
License: MIT
Category:
Last pushed: Apr 09, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/TIGER-AI-Lab/AceCoder"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
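For programmatic use, the curl call above can be wrapped in a small stdlib-only client. The response schema is not documented here, so the sketch just decodes whatever JSON comes back; passing the optional key via an Authorization header is an assumption to verify against the API docs:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def repo_quality_url(owner: str, name: str) -> str:
    """Build the endpoint URL for a given GitHub repo."""
    return f"{API_BASE}/{owner}/{name}"


def fetch_repo_quality(owner: str, name: str, api_key: str = "") -> dict:
    """Fetch the quality record for a repo; schema is assumed to be JSON.

    The Authorization header below is an assumption; check the API docs
    for how a key is actually supplied.
    """
    req = urllib.request.Request(repo_quality_url(owner, name))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


# Usage (performs a live HTTP request):
#   data = fetch_repo_quality("TIGER-AI-Lab", "AceCoder")
#   print(json.dumps(data, indent=2))
```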
Higher-rated alternatives
MMMU-Benchmark/MMMU
This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal...
pat-jj/DeepRetrieval
[COLM’25] DeepRetrieval — 🔥 Training Search Agent by RLVR with Retrieval Outcome
lupantech/MathVista
MathVista: data, code, and evaluation for Mathematical Reasoning in Visual Contexts
x66ccff/liveideabench
[Nature Communications] 🤖💡 LiveIdeaBench: Evaluating LLMs' Scientific Creativity and Idea...
ise-uiuc/magicoder
[ICML'24] Magicoder: Empowering Code Generation with OSS-Instruct