raphischer/strep

Initiating a paradigm shift in reporting and helping make ML advances more considerate of sustainability and trustworthiness.

32 / 100 (Emerging)

This project helps machine learning researchers and practitioners evaluate and compare different ML models based on their efficiency, trustworthiness, and performance. You input your ML experiment results, typically as a spreadsheet or database of model evaluations, and it generates visual reports that highlight the sustainability and reliability of these models. This tool is for anyone developing or deploying AI/ML solutions who needs to make informed decisions about model selection beyond just accuracy.
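
For illustration, this is the kind of tabular input the description refers to. The column names below are assumptions made for this sketch, not a schema documented by strep:

import pandas as pd

# Hypothetical evaluation log: one row per model, mixing quality and resource metrics.
# Column names are illustrative assumptions, not a format required by strep.
results = pd.DataFrame({
    "model": ["resnet50", "mobilenet_v2"],
    "accuracy": [0.76, 0.72],
    "params_million": [25.6, 3.5],
    "energy_wh_per_1k_inferences": [4.2, 0.9],
})
results.to_csv("experiment_results.csv", index=False)  # a spreadsheet of model evaluations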

Available on PyPI.
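
Assuming the PyPI package shares the repository's name (check PyPI before relying on this), installation is the usual one-liner:

pip install strep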

Use this if you need to understand not just how well your machine learning models perform, but also how resource-efficient and trustworthy they are.

Not ideal if you are looking for a tool to train or develop new machine learning models, as its focus is on post-training evaluation and reporting.

machine-learning-evaluation AI-ethics model-benchmarking resource-efficiency trustworthy-AI
No License
Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 17 / 25
Community: 0 / 25


Stars: 11
Forks: n/a
Language: Python
License: None
Last pushed: Jan 27, 2026
Commits (30d): 0
Dependencies: 2

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/raphischer/strep"

Open to everyone: 100 requests/day with no key needed; a free API key raises the limit to 1,000/day.
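
A minimal Python sketch of the same keyless request, using only the endpoint shown above; the response schema is not documented here, so the result is simply pretty-printed:

import json

import requests

# Public endpoint from the curl example above; no key needed up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/raphischer/strep"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # fail loudly on 4xx/5xx (e.g., once the daily quota is exhausted)
print(json.dumps(resp.json(), indent=2))  # schema not documented here, so just inspect it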