jh-jeong/smoothmix
Code for the paper "SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness" (NeurIPS 2021)
This project helps machine learning researchers and practitioners train image classifiers that are robust to adversarial attacks. It takes standard image datasets and produces "smoothed" classifiers, models whose predictions are aggregated over Gaussian-perturbed copies of each input, which come with certified (provable) robustness guarantees. The resulting models are less susceptible to subtle, malicious alterations of input images, making them more trustworthy in sensitive applications.
No commits in the last 6 months.
Use this if you need to build image classifiers that can withstand adversarial perturbations and provide provable guarantees of robustness, particularly in research or critical deployment scenarios.
Not ideal if you are looking for a general-purpose, out-of-the-box image classification tool without a specific focus on certified adversarial robustness, or if you don't have experience with machine learning model training.
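To make the "smoothed classifier" idea concrete, here is a minimal sketch of the majority-vote prediction that a Gaussian-smoothed classifier performs at inference time. This is an illustrative toy, not code from the repo: the function names and the trivial base classifier are assumptions, and a real certification procedure (as in this project) additionally computes a provable robustness radius from the vote statistics.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, seed=0):
    """Majority-vote prediction of a Gaussian-smoothed classifier.

    base_classifier: any function mapping a 1-D input array to an int label.
    sigma: standard deviation of the Gaussian noise added to the input.
    """
    rng = np.random.default_rng(seed)
    # Sample noisy copies of the input and classify each one.
    noise = rng.normal(scale=sigma, size=(n_samples, x.shape[0]))
    labels = [base_classifier(x + d) for d in noise]
    # The smoothed prediction is the most frequent label under noise.
    return int(np.argmax(np.bincount(labels)))

# Toy base classifier (illustrative only): sign of the first coordinate.
def base(x):
    return int(x[0] > 0)

print(smoothed_predict(base, np.array([0.5, 0.0])))  # prints 1
```

Because the input's first coordinate (0.5) is two noise standard deviations above the decision boundary, the overwhelming majority of noisy copies are classified as 1, so the smoothed prediction is stable under small perturbations of the input.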
Stars
21
Forks
3
Language
Roff
License
MIT
Category
ML Frameworks
Last pushed
Sep 27, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jh-jeong/smoothmix"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
namkoong-lab/dro
A package of distributionally robust optimization (DRO) methods. Implemented via cvxpy and PyTorch
neu-autonomy/nfl_veripy
Formal Verification of Neural Feedback Loops (NFLs)
THUDM/grb
Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for...
MinghuiChen43/awesome-trustworthy-deep-learning
A curated list of trustworthy deep learning papers. Daily updating...
ADA-research/VERONA
A lightweight Python package for setting up robustness experiments and computing robustness...