BBVA/mercury-robust

mercury-robust is a framework for robustness testing of ML models and datasets. It provides a collection of tests that are easy to configure and help guarantee robustness in your ML processes.

Quality score: 31 / 100 (Emerging)

This tool helps data scientists and ML engineers ensure their machine learning models and the data they use are reliable and fair in production. You provide your dataset and trained models, and it performs various checks to identify issues like data drift, unfair performance across different user groups, or overly complex models. The output alerts you to potential problems before they impact real-world applications.
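To make the data-drift check concrete, here is a minimal sketch of the idea using a Population Stability Index (PSI) computed from scratch. This is a conceptual illustration only, not mercury-robust's actual API; all function and variable names here are our own.

```python
# Conceptual data-drift check of the kind mercury-robust performs.
# Illustrative only: this does NOT use the mercury-robust API.
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.
    Values below ~0.1 are commonly read as 'no significant drift'."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        n = len(sample)
        # small floor avoids log(0) for empty bins
        return [max(c / n, 1e-6) for c in counts]
    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(5000)]   # reference data
live_ok = [random.gauss(0.0, 1.0) for _ in range(5000)]  # same distribution
live_drifted = [random.gauss(0.8, 1.0) for _ in range(5000)]  # shifted mean

print(psi(train, live_ok) < 0.1)        # expected: True (no drift flagged)
print(psi(train, live_drifted) > 0.25)  # expected: True (drift flagged)
```

In mercury-robust, checks like this are packaged as configurable tests that alert you when a production sample drifts from the training distribution.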

No commits in the last 6 months. Available on PyPI.

Use this if you need to systematically test your machine learning models and datasets to guarantee their performance, fairness, and robustness in live environments, especially in sensitive fields like finance or healthcare.

Not ideal if you are looking for a tool to develop or train machine learning models, as its primary focus is on testing their reliability post-development.

model-validation data-quality ML-operations fairness-auditing production-ML
Stale: 6 months
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 25 / 25
Community: 0 / 25


Stars: 20
Forks:
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Feb 26, 2025
Commits (30d): 0
Dependencies: 8

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/BBVA/mercury-robust"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.