jh-jeong/smoothmix

Code for the paper "SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness" (NeurIPS 2021)

Score: 33 / 100 (Emerging)

This project helps machine learning researchers and practitioners train image classifiers that are robust to adversarial attacks. Given a standard image dataset, it trains "smoothed" classification models whose predictions carry certified robustness guarantees: the models are provably less susceptible to small, malicious perturbations of input images, making them more trustworthy in sensitive applications.
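Concretely, a smoothed classifier replaces a base model's prediction with a majority vote over Gaussian-noised copies of the input; this is the randomized-smoothing idea the paper builds on. A minimal sketch of that inference step, using a toy threshold classifier rather than the repo's actual models:

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=100):
    """Majority-vote prediction of the smoothed classifier g(x):
    classify n_samples Gaussian-perturbed copies of x with the base
    classifier and return the most frequent class.
    (Illustrative sketch of randomized smoothing, not the repo's code.)"""
    rng = np.random.default_rng()
    votes = {}
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        c = base_classifier(noisy)
        votes[c] = votes.get(c, 0) + 1
    return max(votes, key=votes.get)

# Hypothetical base classifier: class 1 if the mean pixel exceeds 0.5.
toy = lambda img: int(img.mean() > 0.5)
x = np.full((8, 8), 0.9)        # far from the decision boundary
print(smoothed_predict(toy, x))  # prints 1
```

SmoothMix's contribution is in how the base classifier is *trained* (mixup along adversarial directions of the smoothed model) so that this vote is both accurate and confidently calibrated; the inference sketch above is unchanged from standard randomized smoothing.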

No commits in the last 6 months.

Use this if you need to build image classifiers that can withstand adversarial perturbations and provide provable guarantees of robustness, particularly in research or critical deployment scenarios.

Not ideal if you are looking for a general-purpose, out-of-the-box image classification tool without a specific focus on certified adversarial robustness, or if you don't have experience with machine learning model training.

adversarial-robustness image-classification machine-learning-research model-certification deep-learning
Stale (6m) · No package · No dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 11 / 25


Stars: 21
Forks: 3
Language: Roff
License: MIT
Last pushed: Sep 27, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jh-jeong/smoothmix"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
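The same endpoint can be called from code. A minimal Python sketch using only the standard library; the `Authorization: Bearer` header for keyed access is an assumption, so check the API docs for the actual scheme:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the quality-record URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo, api_key=None, timeout=10):
    """Fetch and decode the JSON quality record for a repository.
    The bearer-token header below is an assumed convention."""
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")  # assumption
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)

print(quality_url("ml-frameworks", "jh-jeong", "smoothmix"))
# prints https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jh-jeong/smoothmix
```

Without a key, calls count against the shared 100-requests/day limit; pass `api_key=...` once you have a free key for the 1,000/day tier.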