OskarsEzerins/llm-benchmarks
Popular LLM benchmarks for Ruby code generation
This project evaluates and compares how well different AI models generate or fix Ruby code. You plug in the models you want to test and benchmark them against challenges such as optimizing code for speed and memory or debugging broken Ruby programs. The output is a detailed report and an interactive website showing model rankings and metrics, useful for anyone choosing or developing AI models for Ruby programming tasks.
Use this if you need to objectively assess the performance, accuracy, and efficiency of AI models for generating or debugging Ruby code.
Not ideal if you want to benchmark AI models on tasks beyond generating and fixing Ruby code, or if you need a tool for a different programming language.
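To illustrate the kind of speed check such a benchmark runs, here is a minimal Ruby sketch comparing two implementations of the same task using the standard library's Benchmark module. The task and function names are hypothetical examples, not taken from this repo's actual harness.

require "benchmark"

# Two candidate implementations of the same task: summing squares.
# A code-generation benchmark would compare a model-produced version
# against a baseline like this on speed, after checking correctness.
def sum_squares_naive(n)
  total = 0
  (1..n).each { |i| total += i * i }
  total
end

def sum_squares_formula(n)
  n * (n + 1) * (2 * n + 1) / 6
end

n = 1_000_000
raise "results differ" unless sum_squares_naive(n) == sum_squares_formula(n)

naive_time   = Benchmark.realtime { sum_squares_naive(n) }
formula_time = Benchmark.realtime { sum_squares_formula(n) }

puts format("naive:   %.6fs", naive_time)
puts format("formula: %.6fs", formula_time)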
Stars: 75
Forks: 6
Language: Ruby
License: MIT
Category:
Last pushed: Mar 06, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/OskarsEzerins/llm-benchmarks"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
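The same request can be made from Ruby using only the standard library. This sketch assumes the endpoint returns JSON, which the curl example implies but does not state.

require "net/http"
require "json"
require "uri"

# Same request as the curl example above, without an API key.
uri = URI("https://pt-edge.onrender.com/api/v1/quality/llm-tools/OskarsEzerins/llm-benchmarks")
response = Net::HTTP.get_response(uri)

if response.is_a?(Net::HTTPSuccess)
  data = JSON.parse(response.body)  # assumes a JSON body
  puts data
else
  warn "Request failed: #{response.code} #{response.message}"
end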
Higher-rated alternatives
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems