dengls24/LLM-para

Analyze LLM inference: FLOPs, memory, Roofline model. Supports GQA, MoE, MLA, RoPE, SwiGLU. 19 models × 20+ hardware platforms.

Quality score: 32 / 100 (Emerging)

This tool helps hardware architects and machine learning engineers understand the performance, energy use, and cost of running large language models (LLMs) on different hardware. You describe an LLM (such as LLaMA-3) and a hardware platform (such as an NVIDIA H100 GPU), and it computes metrics like FLOPs, memory bottlenecks, throughput, and carbon footprint. It is aimed at professionals who need to optimize LLM inference deployments.
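
Under the hood, estimates like these reduce to a roofline calculation: count the FLOPs and the bytes of memory traffic per token, take their ratio (the arithmetic intensity), and compare it with the hardware's compute-to-bandwidth ratio to see whether decode is compute- or memory-bound. The Python sketch below illustrates the idea for single-token decode; the simplified dense-decoder formulas and the 7B-class model and H100-class hardware figures are illustrative assumptions, not the repository's actual implementation.

# Minimal roofline sketch for single-token LLM decode.
# Simplified dense-decoder cost model: weights dominate memory traffic,
# KV cache and activations ignored. All numbers are illustrative.

def decode_flops_per_token(n_layers, d_model, d_ff, vocab):
    # ~2 FLOPs per weight: attention projections (4*d^2),
    # feed-forward (2*d*d_ff), plus the output projection to the vocab.
    per_layer = 2 * (4 * d_model**2 + 2 * d_model * d_ff)
    return n_layers * per_layer + 2 * d_model * vocab

def decode_bytes_per_token(n_layers, d_model, d_ff, vocab, bytes_per_param=2):
    # Batch-1 decode reads every weight once per token (fp16 assumed).
    params = n_layers * (4 * d_model**2 + 2 * d_model * d_ff) + d_model * vocab
    return params * bytes_per_param

# Illustrative 7B-class model and H100-class hardware (assumed figures).
flops = decode_flops_per_token(n_layers=32, d_model=4096, d_ff=11008, vocab=32000)
moved = decode_bytes_per_token(n_layers=32, d_model=4096, d_ff=11008, vocab=32000)

peak_flops = 989e12  # fp16 tensor-core peak, FLOP/s (approximate)
peak_bw = 3.35e12    # HBM bandwidth, bytes/s (approximate)

intensity = flops / moved        # FLOPs performed per byte moved
ridge = peak_flops / peak_bw     # intensity at which the roofline flattens
bound = "compute" if intensity > ridge else "memory"

# Attainable throughput is the lower of the two roofline ceilings.
tokens_per_s = min(peak_flops / flops, peak_bw / moved)
print(f"intensity {intensity:.1f} FLOPs/B vs ridge {ridge:.0f}: {bound}-bound")
print(f"upper bound ~{tokens_per_s:.0f} tokens/s at batch size 1")

At batch size 1, almost any dense LLM lands far below the ridge point, which is why decode throughput tends to scale with memory bandwidth rather than peak FLOPs.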

Use this if you need to quantitatively compare hardware and LLM architecture configurations and select the most efficient setup for deployment, weighing performance, energy, and cost.

Not ideal if you are looking for a tool to train LLMs, fine-tune models, or evaluate their natural language understanding capabilities.

Tags: LLM deployment · hardware optimization · MLOps · inference engineering · data center efficiency
No License · No Package · No Dependents
Maintenance: 13 / 25
Adoption: 5 / 25
Maturity: 7 / 25
Community: 7 / 25

Stars: 10
Forks: 1
Language: Python
License: none
Last pushed: Mar 17, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/dengls24/LLM-para"

Open to everyone: 100 requests/day with no key needed; a free API key raises the limit to 1,000/day.
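
The same endpoint can be queried programmatically. Below is a minimal Python sketch using the requests library; the response schema is not documented here, so it simply pretty-prints whatever JSON the service returns.

# Minimal sketch: fetch the quality data above from the public API.
# URL taken from the curl example; no API key is required at the
# 100 requests/day tier. Response is assumed to be JSON.
import json
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/dengls24/LLM-para"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))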