isEmmanuelOlowe/llm-cost-estimator
Estimating hardware and cloud costs of LLMs and transformer projects
This tool helps machine learning practitioners quickly determine whether a large language model (LLM) will fit on a specific GPU setup and estimate its running cost. You provide a Hugging Face model ID, and it outputs detailed memory usage, suitable GPU recommendations, performance projections, and cloud cost estimates. It's designed for anyone deploying or evaluating LLMs, from local experiments to production serving.
Use this if you need to evaluate the hardware feasibility and budget implications of running a large language model, whether for training or inference.
Not ideal if you require exact, real-world cost and performance figures without any analytical approximations, as results are indicative and should be validated with actual workloads.
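The core of any such estimate is that model weights dominate GPU memory, scaled by the bytes per parameter of the chosen precision. A minimal sketch of that back-of-envelope calculation is below; this is illustrative only and is not the estimator's actual formula, which also accounts for activations, KV cache, and framework overhead.

```typescript
// Rough VRAM estimate for inference: weight memory only.
// Illustrative sketch — NOT the tool's actual formula.

type Precision = "fp32" | "fp16" | "int8" | "int4";

const BYTES_PER_PARAM: Record<Precision, number> = {
  fp32: 4,
  fp16: 2,
  int8: 1,
  int4: 0.5,
};

/** Estimated GiB of GPU memory to hold `paramsB` billion parameters. */
function estimateWeightMemoryGiB(paramsB: number, precision: Precision): number {
  const bytes = paramsB * 1e9 * BYTES_PER_PARAM[precision];
  return bytes / 2 ** 30;
}

// A 7B model in fp16 needs roughly 13 GiB for weights alone,
// so a 16 GB card is already tight once the KV cache is added.
console.log(estimateWeightMemoryGiB(7, "fp16").toFixed(1)); // → "13.0"
```

This is why quantization matters so much in GPU selection: dropping from fp16 to int4 cuts weight memory by 4x, which can move a model from multi-GPU to single-GPU territory.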
Stars
21
Forks
6
Language
TypeScript
License
MIT
Category
Last pushed
Jan 15, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/isEmmanuelOlowe/llm-cost-estimator"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
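The curl example above can be wrapped in a small TypeScript helper. The endpoint and path shape are taken directly from the example; the `X-API-Key` header name is an assumption for illustration, not a documented part of the API.

```typescript
// Build the API URL for a repo's quality data, matching the
// curl example: /api/v1/quality/llm-tools/<owner>/<repo>.
const API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools";

function qualityUrl(repo: string): string {
  return `${API_BASE}/${repo}`;
}

// Fetch sketch: an API key raises the limit from 100 to 1,000
// requests/day. The header name below is an assumption.
async function fetchQuality(repo: string, apiKey?: string): Promise<unknown> {
  const res = await fetch(qualityUrl(repo), {
    headers: apiKey ? { "X-API-Key": apiKey } : {},
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

console.log(qualityUrl("isEmmanuelOlowe/llm-cost-estimator"));
```

Usage: `await fetchQuality("isEmmanuelOlowe/llm-cost-estimator")` returns the same JSON the curl command prints.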
Related tools
WilliamJlvt/llm_price_scraper
A simple Python Scraper to retrieve pricing information for Large Language Models (LLMs) from an...
nuxdie/ai-pricing
Compare AI model pricing and performance in a simple interactive web app.
FareedKhan-dev/save-llm-api-cost
A straightforward method to reduce your LLM inference API costs and token usage.
paradite/llm-info
Information on LLM models, context window token limit, output token limit, pricing and more.
arc53/llm-price-compass
This project collects GPU benchmarks from various cloud providers and compares them to fixed per...