OPTML-Group/Unlearn-Saliency
[ICLR24 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation" by Chongyu Fan*, Jiancheng Liu*, Yihua Zhang, Eric Wong, Dennis Wei, Sijia Liu
SalUn lets AI model developers and data privacy officers selectively remove specific data points, concepts, or classes from trained image classification or generation models. You provide your existing model and the information to erase, and it outputs a revised model that no longer exhibits the forgotten content. It is aimed at AI practitioners managing model compliance and safety.
Use this if you need to quickly and effectively remove sensitive information, specific classes, or unwanted concepts from trained image models without retraining them from scratch.
Not a fit if your model does not involve image data, or if you need unlearning for natural language processing (NLP) models.
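To make the "gradient-based weight saliency" idea behind SalUn concrete, here is a minimal sketch of the core mask-then-update step. It uses NumPy as a stand-in for the repo's PyTorch code, and the gradient values, threshold, and variable names are hypothetical, not taken from the actual implementation: weights whose forgetting-loss gradient magnitude exceeds a threshold are marked salient, and only those weights are updated during unlearning fine-tuning.

```python
import numpy as np

def saliency_mask(grad, threshold):
    """Binary mask over weights: 1 where the forgetting-loss gradient
    magnitude meets the threshold (the 'salient' weights), else 0."""
    return (np.abs(grad) >= threshold).astype(np.float64)

# Hypothetical toy gradient of a forgetting loss w.r.t. a weight vector.
grad_forget = np.array([0.02, -0.9, 0.4, -0.05, 1.3])

mask = saliency_mask(grad_forget, threshold=0.3)  # -> [0., 1., 1., 0., 1.]

# During unlearning fine-tuning, only masked (salient) weights move;
# the rest stay frozen at their pretrained values.
theta = np.ones(5)
update = np.full(5, 0.1)  # hypothetical fine-tuning step
theta_unlearned = theta - mask * update
```

In the actual method the mask is computed once from the gradient of the forgetting loss on the forget set, then applied elementwise to every parameter update during the unlearning phase, which is what keeps the edit localized and cheaper than full retraining.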
Stars: 143
Forks: 29
Language: Python
License: MIT
Category:
Last pushed: Feb 28, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/OPTML-Group/Unlearn-Saliency"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
Related models
Shilin-LU/VINE
[ICLR 2025] "Robust Watermarking Using Generative Priors Against Image Editing: From...
WindVChen/DiffAttack
An unrestricted attack based on diffusion models that can achieve both good transferability and...
koninik/DiffusionPen
Official PyTorch Implementation of "DiffusionPen: Towards Controlling the Style of Handwritten...
Wuyxin/DISC
(ICML 2023) Discover and Cure: Concept-aware Mitigation of Spurious Correlation
bytedance/LatentUnfold
Implementation of paper: Flux Already Knows – Activating Subject-Driven Image Generation without Training