ddmms/ml-peg

ML Performance and Extrapolation Guide

Score: 63 / 100 (Established)

ML-PEG is an open-source framework for evaluating and comparing machine learning models. It helps researchers and practitioners assess how different models perform across various datasets and computational resources. You provide your experimental results, and ML-PEG gives you insights into model efficiency and scalability.

Available on PyPI.

Use this if you need to systematically benchmark different machine learning models and understand their performance characteristics as data or resources change.

Not ideal if you are looking for a tool to train or deploy machine learning models directly, as ML-PEG focuses on evaluation and comparison.

Tags: ML model evaluation, scientific computing, computational research, benchmark analysis, performance extrapolation
Maintenance 10 / 25
Adoption 7 / 25
Maturity 25 / 25
Community 21 / 25
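
The overall score appears to be the simple sum of the four category components shown above (this additive breakdown is an observation from the numbers on this page, not a documented scoring formula):

```python
# Category scores as listed on this page; the overall score (63/100)
# matches their sum exactly.
scores = {"Maintenance": 10, "Adoption": 7, "Maturity": 25, "Community": 21}
total = sum(scores.values())
print(total)  # 63
```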


Stars: 33
Forks: 37
Language: Python
License: GPL-3.0
Last pushed: Mar 12, 2026
Commits (30d): 0
Dependencies: 12

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ddmms/ml-peg"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
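
For programmatic use, the same endpoint can be called from Python. This is a minimal sketch using only the standard library; the URL structure comes from the curl example above, but the JSON response schema is assumed, not documented here:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, repo: str) -> str:
    """Build the quality endpoint URL for a repo, e.g. 'ddmms/ml-peg'."""
    return f"{API_BASE}/{category}/{repo}"


def fetch_quality(category: str, repo: str) -> dict:
    """Fetch the quality record as a dict (no API key needed
    for up to 100 requests/day)."""
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)


# Usage (performs a network request):
# record = fetch_quality("ml-frameworks", "ddmms/ml-peg")
```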