JonathanChavezTamales/llm-leaderboard

A comprehensive set of LLM benchmark scores and provider prices. (Deprecated; see the README for details.)

Score: 49 / 100 (Emerging)

This repository provides a comprehensive, community-driven resource for comparing large language models (LLMs). It compiles detailed information on each model, including parameter counts, pricing, performance metrics such as throughput and latency, and standardized benchmark results across a range of tests. It is aimed at anyone, from researchers to business strategists, who needs to evaluate and select LLMs for specific applications or understand the competitive landscape.


Use this if you need to compare LLMs on technical specifications, benchmark performance, and pricing to make informed decisions for your projects.

Not ideal if you're looking for real-time API monitoring or advanced analytics beyond aggregated benchmark scores.

Tags: LLM evaluation, AI model selection, machine learning research, AI strategy, language model comparison
No package · No dependents
Maintenance: 6 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 17 / 25
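The four sub-scores shown above appear to sum to the overall 49 / 100 — a plausible reading of the card, not a documented scoring formula. A minimal sketch checking that arithmetic:

```python
# Sub-scores as shown on the card (each out of 25).
subscores = {"Maintenance": 6, "Adoption": 10, "Maturity": 16, "Community": 17}

# Assumption: the overall score is the plain sum of the sub-scores.
overall = sum(subscores.values())
print(overall)  # 49, matching the card's 49 / 100
```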


Stars: 362
Forks: 40
Language: JavaScript
License: (none shown)
Last pushed: Oct 24, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/JonathanChavezTamales/llm-leaderboard"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
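The endpoint path in the curl example above ends with the repository's GitHub slug, so it presumably generalizes to other repos by swapping the owner and name. A small sketch under that assumption (the `quality_url` helper is hypothetical, not part of the service):

```python
def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a GitHub repo.

    Assumes the path pattern from the card's curl example
    (.../api/v1/quality/llm-tools/{owner}/{repo}) holds for other repos.
    """
    return f"https://pt-edge.onrender.com/api/v1/quality/llm-tools/{owner}/{repo}"

# Reproduces the exact URL shown in the curl example.
print(quality_url("JonathanChavezTamales", "llm-leaderboard"))
```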