arc53/llm-price-compass
This project collects GPU benchmarks from various cloud providers and compares them to fixed per-token costs, helping you select a GPU efficiently and run LLMs cost-effectively.
This tool helps you pick the most cost-effective graphics card (GPU) and cloud provider for running your Large Language Models (LLMs). It takes benchmark data from various GPUs and cloud services, then compares the resulting per-token cost against the per-token pricing of hosted LLM providers. AI engineers, machine learning operations (MLOps) specialists, and data scientists can use this to optimize their inference costs.
224 stars. No commits in the last 6 months.
Use this if you need to choose the most affordable GPU and cloud provider combination for deploying your Large Language Models.
Not ideal if you are looking for benchmarks on model training costs or specific fine-tuning expenses.
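The comparison the tool performs comes down to converting a GPU's hourly rental price and its benchmarked throughput into a per-token figure that can be set side by side with hosted-API pricing. A minimal sketch of that arithmetic, where the function name and all prices and throughput numbers are illustrative assumptions, not data from llm-price-compass:

```python
def cost_per_million_tokens(gpu_hourly_usd: float, tokens_per_second: float) -> float:
    """Dollars per 1M generated tokens for a GPU rented by the hour.

    Both inputs are assumptions you supply: the cloud provider's hourly
    rate and the sustained throughput from a benchmark run.
    """
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hourly_usd / tokens_per_hour * 1_000_000

# Hypothetical example: a GPU rented at $2.00/hr sustaining 1,500 tokens/s
gpu_cost = cost_per_million_tokens(2.00, 1500)  # ≈ $0.37 per 1M tokens
```

The resulting dollars-per-1M-tokens number is directly comparable with the per-1M-token prices that hosted LLM providers publish, which is the core comparison this tool automates.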
Stars: 224
Forks: 10
Language: TypeScript
License: MIT
Category:
Last pushed: Dec 16, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/arc53/llm-price-compass"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
isEmmanuelOlowe/llm-cost-estimator
Estimating hardware and cloud costs of LLMs and transformer projects
WilliamJlvt/llm_price_scraper
A simple Python Scraper to retrieve pricing information for Large Language Models (LLMs) from an...
nuxdie/ai-pricing
Compare AI model pricing and performance in a simple interactive web app.
FareedKhan-dev/save-llm-api-cost
A straightforward method to reduce your LLM inference API costs and token usage.
paradite/llm-info
Information on LLM models, context window token limit, output token limit, pricing and more.