LARK-AI-Lab/CodeScaler
The official repo for "CodeScaler: Scaling Code LLM Training and Test-Time Inference via Execution-Free Reward Models"
This tool helps developers who are training or using large language models for code generation tasks. It takes a coding problem description and candidate code solutions, then outputs a score indicating the quality of each solution. The primary users are AI/ML engineers and researchers working on code LLMs, who need to efficiently evaluate and improve their models.
Use this if you need to quickly and efficiently score the quality of generated code solutions without running time-consuming execution-based tests.
Not ideal if your primary goal is to run traditional unit tests for correctness on fully developed software, rather than evaluating AI-generated code.
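The workflow the description outlines — score each candidate solution, then keep the best — can be sketched generically. Here `score_fn` is a stand-in for whatever execution-free reward model you plug in; nothing below uses CodeScaler's actual API, whose interface is not documented on this page.

```python
from typing import Callable, List, Tuple

def rank_candidates(
    problem: str,
    candidates: List[str],
    score_fn: Callable[[str, str], float],
) -> List[Tuple[float, str]]:
    """Score each candidate solution for a problem and sort best-first.

    score_fn is a placeholder for an execution-free reward model:
    it maps (problem, code) to a quality score without running tests.
    """
    scored = [(score_fn(problem, c), c) for c in candidates]
    return sorted(scored, key=lambda t: t[0], reverse=True)

# Toy score_fn for illustration only: prefer shorter solutions.
toy_score = lambda problem, code: -len(code)
ranked = rank_candidates(
    "reverse a string s",
    ["s[::-1]", "''.join(reversed(s))"],
    toy_score,
)
print(ranked[0][1])  # → s[::-1]
```

A real reward model would replace `toy_score`; the selection loop itself is the standard best-of-n pattern used at test time.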
Stars
32
Forks
—
Language
Python
License
—
Category
—
Last pushed
Mar 26, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/LARK-AI-Lab/CodeScaler"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
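The curl call above can be wrapped in a few lines of Python. A minimal sketch: the URL-building logic mirrors the endpoint shown, but the JSON field names in the sample payload are assumptions, not taken from the actual API response.

```python
import json
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint URL."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

# Fetching over the network (keyless tier: 100 requests/day):
# import urllib.request
# with urllib.request.urlopen(quality_url("LARK-AI-Lab", "CodeScaler")) as r:
#     data = json.load(r)

# Parsing a hypothetical payload -- these field names are illustrative:
sample = '{"stars": 32, "language": "Python"}'
data = json.loads(sample)
print(quality_url("LARK-AI-Lab", "CodeScaler"))
print(data["stars"])  # → 32
```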
Higher-rated alternatives
aalok-sathe/surprisal
A unified interface for computing surprisal (log probabilities) from language models! Supports...
EvolvingLMMs-Lab/lmms-engine
A simple, unified multimodal models training engine. Lean, flexible, and built for hacking at scale.
FunnySaltyFish/Better-Ruozhiba
[Item-by-item processing complete] A curated QA dataset of selected Ruozhiba questions, with every entry manually reviewed and revised.
reasoning-machines/pal
PaL: Program-Aided Language Models (ICML 2023)
microsoft/monitors4codegen
Code and Data artifact for NeurIPS 2023 paper - "Monitor-Guided Decoding of Code LMs with Static...