ejhusom/MELODI

Use local Large Language Models (LLMs) while monitoring energy usage.

Score: 20 / 100 (Experimental)

This project helps you understand the environmental cost of running local Large Language Models (LLMs) by measuring the energy consumed during inference. You feed prompts to your chosen LLM, and the tool reports detailed energy-consumption data for the run. It is aimed at researchers, MLOps engineers, and anyone optimizing the efficiency of AI models.

Use this if you need to quantify the energy footprint of specific LLM inference tasks on your local machine.

Not ideal if you are looking for an LLM itself, or for a tool that tracks general system power usage without tying it to specific LLM inference runs.
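The core idea behind this kind of measurement is simple: sample a cumulative energy counter before and after an inference call and report the difference. A minimal sketch of that pattern, assuming a Linux RAPL-style counter in microjoules (this is an illustration of the technique, not MELODI's actual implementation):

```python
def energy_delta_joules(before_uj: int, after_uj: int, max_uj: int = 2**32) -> float:
    """Energy consumed between two cumulative counter readings, in joules.

    RAPL-style counters report cumulative microjoules and wrap around at a
    hardware-defined maximum (max_uj here is a placeholder value).
    """
    delta = after_uj - before_uj
    if delta < 0:  # counter wrapped past its maximum between the two samples
        delta += max_uj
    return delta / 1e6

# Sample the counter, run inference, sample again, take the difference:
print(energy_delta_joules(1_000_000, 4_500_000))  # → 3.5
```

On Linux, the counter would typically be read from `/sys/class/powercap/intel-rapl:0/energy_uj` immediately before and after the inference call.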

AI-ethics sustainable-AI LLM-operations computational-efficiency model-optimization
No License | No Package | No Dependents

Maintenance: 6 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 0 / 25

Stars: 16
Forks:
Language: Jupyter Notebook
License: None
Last pushed: Dec 08, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ejhusom/MELODI"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
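The same endpoint can be queried programmatically. A small sketch that builds the request URL for any repo, following the path shape in the curl example above (the response's JSON field names are not documented here, so only the URL construction is shown):

```python
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repo, escaping each path segment."""
    return f"{API_BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

print(quality_url("llm-tools", "ejhusom", "MELODI"))
# → https://pt-edge.onrender.com/api/v1/quality/llm-tools/ejhusom/MELODI
```

The URL can then be fetched with any HTTP client (e.g. `urllib.request.urlopen` or `curl`) and the JSON body parsed as usual.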