RyanLucas3/HR_Neural_Networks

Certified robustness of deep neural networks

Quality score: 36 / 100 (Emerging)

This project helps machine learning practitioners train neural networks with certified (provable) robustness to common data issues such as corrupted training data and adversarial test-time perturbations. It takes an existing deep learning model and training set and outputs a more robustly trained model. It is aimed at anyone who deploys machine learning models in environments where data quality or security is a concern, such as finance, healthcare, or security applications.
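The idea behind this kind of robust training is to fit the model not to the data as given, but to the worst small perturbation of each training point within a fixed budget. The package's actual API is not shown on this page, so the sketch below is only a minimal, hypothetical illustration of the concept on a linear model with an FGSM-style inner step; every function and parameter name here is my own, not the library's:

```python
# Hypothetical sketch of robust (worst-case) training, NOT this package's API.
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def worst_case_input(w, x, y, eps):
    # Worst perturbation of x inside an L-infinity ball of radius eps,
    # via one FGSM-style step: grad of (x.w - y)^2 w.r.t. x is 2*(x.w - y)*w.
    err = dot(w, x) - y
    return [xi + eps * sign(2 * err * wi) for xi, wi in zip(x, w)]

def robust_train(X, Y, eps=0.1, lr=0.01, epochs=200):
    # SGD, but each step is taken on the adversarially perturbed input.
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for x, y in zip(X, Y):
            x_adv = worst_case_input(w, x, y, eps)
            err = dot(w, x_adv) - y
            w = [wi - lr * 2 * err * xi for wi, xi in zip(w, x_adv)]
    return w

# Toy data generated by a known linear model.
random.seed(0)
w_true = [1.0, -2.0, 0.5]
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(50)]
Y = [dot(w_true, x) for x in X]
w = robust_train(X, Y)
```

The learned weights land near the clean solution but slightly shrunk, which is the price of insuring against worst-case perturbations; the package applies this kind of idea, with certified guarantees, to deep networks.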

No commits in the last 6 months. Available on PyPI.

Use this if you need to guarantee that your machine learning models will perform reliably even when faced with corrupted training data or adversarial attacks on test data.

Not ideal if your primary concern is solely achieving the highest possible accuracy on perfectly clean data, or if you do not work with deep neural networks.

Tags: data-security, model-robustness, adversarial-defense, machine-learning-auditing, reliable-AI
Status: Stale (6 months), no known dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 25 / 25
Community: 5 / 25


Stars: 19
Forks: 1
Language: Python
License: MIT
Last pushed: Aug 20, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/RyanLucas3/HR_Neural_Networks"

Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.