alexjfoote/reetoolbox

Toolbox for measuring the adversarial robustness of machine learning models to a variety of image transforms

Overall score: 45 / 100 (Emerging)

When building machine learning models for image analysis, this tool helps you understand how robust your model is to real-world image variations like different staining or lighting. It takes your trained image classification model and a dataset, then generates new, subtly altered versions of your images that are specifically designed to fool your model. The output shows you exactly how vulnerable your model is to these challenging but realistic changes, helping you assess its reliability in practical applications.
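To make the workflow concrete, the sketch below shows the general idea in PyTorch: search for a small, realistic transform (here a bounded brightness shift) that maximises the model's loss, then compare clean accuracy against accuracy on the transformed images. This is an illustrative assumption about how such an evaluation works, not reetoolbox's actual API; all function names are hypothetical.

# Illustrative sketch only: names and the brightness attack are
# hypothetical, not reetoolbox's documented interface.
import torch
import torch.nn.functional as F

def adversarial_brightness(model, images, labels, steps=10, lr=0.05, bound=0.2):
    """Find a per-image brightness shift (within +/- bound) that
    maximises the classification loss; return the shifted images."""
    shift = torch.zeros(images.size(0), 1, 1, 1, requires_grad=True)
    opt = torch.optim.Adam([shift], lr=lr)
    for _ in range(steps):
        adv = (images + shift).clamp(0, 1)
        loss = -F.cross_entropy(model(adv), labels)  # negated: we maximise loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            shift.clamp_(-bound, bound)  # keep the transform realistic
    return (images + shift.detach()).clamp(0, 1)

def robustness_drop(model, loader):
    """Accuracy on clean images vs. adversarially transformed images."""
    model.eval()
    clean_correct = adv_correct = total = 0
    for images, labels in loader:
        with torch.no_grad():
            clean_correct += (model(images).argmax(1) == labels).sum().item()
        adv = adversarial_brightness(model, images, labels)
        with torch.no_grad():
            adv_correct += (model(adv).argmax(1) == labels).sum().item()
        total += labels.size(0)
    return clean_correct / total, adv_correct / total

A large gap between the two accuracies returned by robustness_drop is the kind of vulnerability signal this tool is designed to surface.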

No commits in the last 6 months. Available on PyPI.

Use this if you need to reliably assess how well your image classification model performs when faced with subtle, real-world variations in input data that could degrade its accuracy.

Not ideal if your primary goal is to interpret model decisions or improve model fairness, as this tool focuses specifically on adversarial robustness to image transformations.

Tags: medical-imaging, pathology, image-classification, model-validation, quality-control
Status: Stale (6 months), no dependents
Maintenance: 2 / 25
Adoption: 6 / 25
Maturity: 25 / 25
Community: 12 / 25


Stars: 19
Forks: 3
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Apr 24, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/alexjfoote/reetoolbox"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
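The same endpoint can be queried from Python; a minimal sketch using the requests library is shown below. The URL comes from the curl example above, but the shape of the JSON payload is not documented here, so treat the response fields as unknown until inspected.

# Minimal sketch of calling the quality API from Python.
# The response schema is an assumption; inspect the payload yourself.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/alexjfoote/reetoolbox"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # exact field names are not documented here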