RyanLucas3/HR_Neural_Networks
Certified robustness of deep neural networks
This project helps machine learning practitioners build neural networks that are provably robust to corrupted training data and adversarial perturbations at test time. It takes an existing deep learning model and its training data and outputs a more robustly trained model. It is aimed at anyone who deploys machine learning models in environments where data quality or security is a concern, such as finance, healthcare, or security applications.
No commits in the last 6 months. Available on PyPI.
Use this if you need to guarantee that your machine learning models will perform reliably even when faced with corrupted training data or adversarial attacks on test data.
Not ideal if your primary concern is solely achieving the highest possible accuracy on perfectly clean data, or if you do not work with deep neural networks.
Stars: 19
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Aug 20, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/RyanLucas3/HR_Neural_Networks"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
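The same endpoint can also be called from Python. A minimal sketch using only the standard library; the response schema is not documented on this page, so the helper simply returns the parsed JSON:

```python
import json
import urllib.request

# Endpoint shown in the curl example above (no key needed for
# up to 100 requests/day).
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/RyanLucas3/HR_Neural_Networks")

def fetch_quality_data(url: str = URL) -> dict:
    """Fetch the repo's quality data and return the parsed JSON body."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)
```

A tool like `jq` or Python's `json.dumps(..., indent=2)` can then pretty-print the result for inspection.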
Higher-rated alternatives
namkoong-lab/dro
A package of distributionally robust optimization (DRO) methods, implemented via cvxpy and PyTorch.
MinghuiChen43/awesome-trustworthy-deep-learning
A curated list of trustworthy deep learning papers. Daily updating...
neu-autonomy/nfl_veripy
Formal Verification of Neural Feedback Loops (NFLs)
THUDM/grb
Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for...
ADA-research/VERONA
A lightweight Python package for setting up robustness experiments and to compute robustness...