arc53/llm-price-compass

This project collects GPU benchmarks from various cloud providers and converts them into fixed per-token costs, so you can compare self-hosted GPU inference against hosted LLM provider pricing. Use it to select cost-effective GPUs and cloud providers for running LLMs.

Score: 35 / 100 (Emerging)

This tool helps you pick the most cost-effective graphics card (GPU) and cloud provider for running your Large Language Models (LLMs). It takes benchmark data from various GPUs and cloud services, then compares the resulting per-token cost against the prices charged by hosted LLM providers. AI engineers, machine learning operations (MLOps) specialists, and data scientists can use this to optimize their inference costs.
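The core comparison described above boils down to converting a GPU's hourly rental price and benchmarked throughput into a per-token cost. A minimal sketch of that arithmetic, in TypeScript (the function name and figures are illustrative assumptions, not taken from the project):

```typescript
// Convert a GPU rental rate and measured throughput into an effective
// price per million tokens, so it can be compared with hosted-API pricing.
// All numbers below are illustrative, not project data.
export function costPerMillionTokens(
  hourlyUsd: number, // cloud price for the GPU instance, USD/hour
  tokensPerSec: number, // benchmarked inference throughput
): number {
  const tokensPerHour = tokensPerSec * 3600;
  return (hourlyUsd / tokensPerHour) * 1_000_000;
}

// Example: a $2.00/hr GPU sustaining 1,000 tokens/s works out to
// roughly $0.56 per million tokens generated.
console.log(costPerMillionTokens(2.0, 1000).toFixed(2)); // "0.56"
```

If that figure is lower than a hosted provider's per-million-token price at your expected utilization, self-hosting on that GPU is the cheaper option.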

224 stars. No commits in the last 6 months.

Use this if you need to choose the most affordable GPU and cloud provider combination for deploying your Large Language Models.

Not ideal if you are looking for benchmarks on model training costs or specific fine-tuning expenses.

Tags: LLM deployment, cloud cost optimization, GPU selection, AI model pricing, MLOps
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 9 / 25


Stars: 224
Forks: 10
Language: TypeScript
License: MIT
Last pushed: Dec 16, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/arc53/llm-price-compass"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
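The same endpoint can be called programmatically. A minimal TypeScript sketch, assuming Node 18+ (for the global `fetch`); the endpoint URL comes from the curl example above, but the shape of the JSON response is not documented here and should be inspected before relying on specific fields:

```typescript
// Build the quality-API URL for a given GitHub repo and fetch it.
// Base URL taken from the curl example above; response shape is an assumption.
const API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools";

export function qualityUrl(owner: string, repo: string): string {
  return `${API_BASE}/${owner}/${repo}`;
}

export async function fetchQuality(owner: string, repo: string): Promise<unknown> {
  const res = await fetch(qualityUrl(owner, repo));
  if (!res.ok) throw new Error(`API request failed: ${res.status}`);
  // Returned as `unknown` since the payload fields are not specified here.
  return res.json();
}

// Usage: fetchQuality("arc53", "llm-price-compass").then(console.log);
```

Keeping the URL builder separate from the fetch makes the request target easy to test without hitting the rate-limited API.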