ddmms/ml-peg
ML Performance and Extrapolation Guide
ML-PEG is an open-source framework for evaluating and comparing machine learning models. It helps researchers and practitioners assess how models perform across datasets and computational budgets: you provide your experimental results, and ML-PEG returns insights into model efficiency and scalability.
Available on PyPI.
Use this if you need to systematically benchmark different machine learning models and understand their performance characteristics as data or resources change.
Not ideal if you are looking for a tool to train or deploy machine learning models directly, as ML-PEG focuses on evaluation and comparison.
Stars
33
Forks
37
Language
Python
License
GPL-3.0
Category
Last pushed
Mar 12, 2026
Commits (30d)
0
Dependencies
12
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ddmms/ml-peg"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
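The same endpoint can be called from Python instead of curl. A minimal sketch, assuming the endpoint returns a JSON document; the helper names and the returned field layout are illustrative assumptions, not part of the documented API:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a repository (path shape taken
    from the curl example above)."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record as a dict. No key is needed for up
    to 100 requests/day; pass a key for higher limits (how the key
    is supplied -- header or query parameter -- is not documented
    here, so check the API docs)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# URL for this repository:
print(quality_url("ml-frameworks", "ddmms", "ml-peg"))
# → https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ddmms/ml-peg
```

`fetch_quality` is only a thin wrapper; for scripted use you would add error handling for rate-limit responses once you exceed the free daily quota.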
Related frameworks
deepmodeling/deepmd-kit
A deep learning package for many-body potential energy representation and molecular dynamics
chemprop/chemprop
Message Passing Neural Networks for Molecule Property Prediction
mir-group/nequip
NequIP is a code for building E(3)-equivariant interatomic potentials
Acellera/moleculekit
MoleculeKit: Your favorite molecule manipulation kit
CederGroupHub/chgnet
Pretrained universal neural network potential for charge-informed atomistic modeling...