isEmmanuelOlowe/llm-cost-estimator

Estimating hardware and cloud costs of LLMs and transformer projects

Quality score: 48 / 100 (Emerging)

This tool helps machine learning practitioners quickly determine whether a large language model (LLM) will fit on a specific GPU setup and estimate its running cost. You provide a Hugging Face model ID, and it outputs detailed memory usage, suitable GPU recommendations, performance projections, and cloud cost estimates. It's designed for anyone deploying or evaluating LLMs.

Use this if you need to evaluate the hardware feasibility and budget implications of running a large language model, whether for training or inference.

Not ideal if you require exact, real-world cost and performance figures without any analytical approximations, as results are indicative and should be validated with actual workloads.

Topics: Machine Learning Operations, GPU Resource Planning, Cloud Cost Management, Large Language Model Deployment, AI Project Scoping
No package published, no dependents
Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 16 / 25


Stars: 21
Forks: 6
Language: TypeScript
License: MIT
Last pushed: Jan 15, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/isEmmanuelOlowe/llm-cost-estimator"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
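The curl command above can also be used from TypeScript. The sketch below builds the per-repo endpoint URL and fetches it with the standard `fetch` API; the response shape is an assumption (only the endpoint URL appears on this page), so inspect the JSON before relying on specific fields.

```typescript
// Base path of the pt-edge quality API, taken from the curl example above.
const BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools";

// Build the endpoint URL for a given GitHub owner/repo pair.
function qualityUrl(owner: string, repo: string): string {
  return `${BASE}/${owner}/${repo}`;
}

// Fetch the quality data for a repo. The JSON shape is not documented
// on this page, so the result is typed as `unknown`.
async function fetchQuality(owner: string, repo: string): Promise<unknown> {
  const res = await fetch(qualityUrl(owner, repo));
  if (!res.ok) {
    throw new Error(`pt-edge API error: HTTP ${res.status}`);
  }
  return res.json();
}

// Example (unauthenticated, subject to the 100 requests/day limit):
// const data = await fetchQuality("isEmmanuelOlowe", "llm-cost-estimator");
```

Note that unauthenticated requests share the 100/day quota, so cache responses if you poll this endpoint from a script.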