BBVA/mercury-robust
mercury-robust is a framework for robustness testing of ML models and datasets. It provides a collection of tests that are easy to configure and help guarantee robustness in your ML processes.
This tool helps data scientists and ML engineers ensure their machine learning models and the data they use are reliable and fair in production. You provide your dataset and trained models, and it performs various checks to identify issues like data drift, unfair performance across different user groups, or overly complex models. The output alerts you to potential problems before they impact real-world applications.
No commits in the last 6 months. Available on PyPI.
Use this if you need to systematically test your machine learning models and datasets to guarantee their performance, fairness, and robustness in live environments, especially in sensitive fields like finance or healthcare.
Not ideal if you are looking for a tool to develop or train machine learning models, as its primary focus is on testing their reliability post-development.
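To illustrate the kind of check such a framework automates, below is a minimal hand-rolled data drift test. This is not mercury-robust's API; it is a sketch that compares each numeric feature's distribution between a reference dataset and incoming data with a two-sample Kolmogorov-Smirnov test. The column names, threshold, and synthetic data are illustrative assumptions.

# Hand-rolled sketch of a data drift check (illustrative only; this is
# NOT mercury-robust's API). Flags columns whose distribution shifted
# between a reference dataset and incoming data.
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp

def drift_report(reference: pd.DataFrame, current: pd.DataFrame,
                 p_threshold: float = 0.01) -> pd.DataFrame:
    """Run a two-sample KS test per shared numeric column.

    A small p-value means the two samples are unlikely to come from
    the same distribution, i.e. the column has probably drifted.
    """
    rows = []
    numeric_cols = reference.select_dtypes(include=np.number).columns
    for col in numeric_cols.intersection(current.columns):
        stat, p_value = ks_2samp(reference[col].dropna(), current[col].dropna())
        rows.append({"column": col, "ks_stat": stat,
                     "p_value": p_value, "drifted": p_value < p_threshold})
    return pd.DataFrame(rows)

# Usage with synthetic data: the "income" column is shifted on purpose.
rng = np.random.default_rng(0)
ref = pd.DataFrame({"age": rng.normal(40, 10, 1000),
                    "income": rng.normal(30_000, 5_000, 1000)})
cur = pd.DataFrame({"age": rng.normal(40, 10, 1000),
                    "income": rng.normal(45_000, 5_000, 1000)})  # drifted
print(drift_report(ref, cur))

A library like mercury-robust packages checks of this kind (drift, fairness, model complexity) behind a consistent test interface so they can run as a suite before deployment.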
Stars: 20
Forks: —
Language: Jupyter Notebook
License: Apache-2.0
Category: ML frameworks
Last pushed: Feb 26, 2025
Commits (30d): 0
Dependencies: 8
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/BBVA/mercury-robust"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
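The same endpoint can also be called from Python. A minimal sketch using the requests library follows; it assumes only that the endpoint returns JSON, since the response fields are not documented here.

# Fetch the quality data for BBVA/mercury-robust from the public API.
# No key needed at the 100-requests/day tier (per the note above).
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/BBVA/mercury-robust"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()  # exact response schema is an assumption
print(data)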
Higher-rated alternatives
namkoong-lab/dro
A package of distributionally robust optimization (DRO) methods, implemented via cvxpy and PyTorch.
MinghuiChen43/awesome-trustworthy-deep-learning
A curated list of trustworthy deep learning papers. Daily updating...
neu-autonomy/nfl_veripy
Formal Verification of Neural Feedback Loops (NFLs)
THUDM/grb
Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for...
ADA-research/VERONA
A lightweight Python package for setting up robustness experiments and to compute robustness...