raphischer/strep
Initiating a paradigm shift in reporting and helping make ML advances more considerate of sustainability and trustworthiness.
This project helps machine learning researchers and practitioners evaluate and compare ML models on efficiency, trustworthiness, and performance. You feed in your experiment results, typically a spreadsheet or database of model evaluations, and it generates visual reports that highlight how sustainable and reliable each model is. It is aimed at anyone developing or deploying ML solutions who needs to make informed model-selection decisions beyond accuracy alone.
Available on PyPI.
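Assuming the PyPI package name matches the repository (not confirmed above), installation would be:
pip install strep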
Use this if you need to understand not just how well your machine learning models perform, but also how resource-efficient and trustworthy they are.
Not ideal if you are looking for a tool to train or develop new machine learning models, as its focus is on post-training evaluation and reporting.
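For illustration, here is a minimal sketch of the kind of evaluation spreadsheet you might feed in. The column names are hypothetical, chosen to reflect the efficiency and performance metrics described above; they are not strep's documented input schema.

import pandas as pd

# Hypothetical model-evaluation log: one row per trained model.
# Column names are illustrative only; consult the strep docs for
# the actual expected input format.
results = pd.DataFrame({
    "model": ["resnet50", "mobilenet_v2", "efficientnet_b0"],
    "accuracy": [0.761, 0.718, 0.774],
    "inference_ms": [42.0, 11.5, 18.3],
    "energy_wh": [3.2, 0.9, 1.4],
})

# Save as a spreadsheet-style CSV, the input format mentioned above.
results.to_csv("evaluations.csv", index=False)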
Stars: 11
Forks: —
Language: Python
License: —
Category: —
Last pushed: Jan 27, 2026
Commits (30d): 0
Dependencies: 2
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/raphischer/strep"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
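The same call from Python, as a minimal sketch. It assumes the endpoint returns a JSON body; the response schema is not documented here, so the result is printed as-is.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/raphischer/strep"

# No API key needed for up to 100 requests/day.
response = requests.get(URL, timeout=10)
response.raise_for_status()

# Assuming a JSON response; the schema is not shown above.
print(response.json())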
Higher-rated alternatives
streamlit/streamlit
Streamlit — A faster way to build and share data apps.
pycaret/pycaret
An open-source, low-code machine learning library in Python
rio-labs/rio
WebApps in pure Python. No JavaScript, HTML and CSS needed
MarcSkovMadsen/awesome-streamlit
The purpose of this project is to share knowledge on how awesome Streamlit is and can be
jrieke/streamlit-analytics
👀 Track & visualize user interactions with your streamlit app