OPTML-Group/Unlearn-Saliency

[ICLR24 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation" by Chongyu Fan*, Jiancheng Liu*, Yihua Zhang, Eric Wong, Dennis Wei, Sijia Liu

Quality score: 56 / 100 (Established)

SalUn helps AI model developers and data privacy officers selectively remove specific data points, classes, or concepts from trained image classification or generation models. You provide an existing model and the information you want to erase, and it produces a revised model that no longer exhibits knowledge of the forgotten content, making it useful for practitioners managing model compliance and safety.


Use this if you need to quickly and effectively remove sensitive information, specific classes, or unwanted concepts from your trained image AI models without fully retraining them from scratch.
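The core idea named in the paper's title, gradient-based weight saliency, can be sketched conceptually: compute the magnitude of the forgetting-loss gradient for each weight, keep only the most salient fraction, and restrict the unlearning update to those weights. The NumPy snippet below is an illustrative sketch under those assumptions, not code from this repository; the function names and the quantile thresholding rule are illustrative choices.

```python
import numpy as np

def saliency_mask(forget_grads, keep_frac=0.5):
    # Saliency of a weight = |gradient of the forgetting loss w.r.t. that weight|.
    sal = np.abs(forget_grads)
    # Keep the top `keep_frac` fraction of weights as "salient".
    cutoff = np.quantile(sal, 1.0 - keep_frac)
    return (sal >= cutoff).astype(np.float32)

def masked_update(weights, update, mask):
    # Apply the unlearning update only where the mask is 1; leave the rest intact.
    return weights - mask * update

rng = np.random.default_rng(0)
w = rng.normal(size=8)      # toy weight vector
g = rng.normal(size=8)      # stand-in for forgetting-loss gradients
m = saliency_mask(g, keep_frac=0.5)
w_new = masked_update(w, 0.1 * g, m)
```

Updating only the salient weights is what lets the method erase targeted content while leaving most of the model, and hence its retained capabilities, untouched.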

Not a fit if your model does not involve image data, or if you need to unlearn information from natural language processing (NLP) models.

Tags: AI model compliance, data privacy, image, content moderation, machine unlearning, AI safety
No package published · No dependents
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 20 / 25


Stars: 143
Forks: 29
Language: Python
License: MIT
Last pushed: Feb 28, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/OPTML-Group/Unlearn-Saliency"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
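The same endpoint can be queried from Python instead of curl. A minimal standard-library sketch; the response schema is not documented on this page, so the result is returned as raw parsed JSON rather than mapped to named fields.

```python
import json
import urllib.request

# Endpoint copied from the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "diffusion/OPTML-Group/Unlearn-Saliency")

def fetch_quality(url: str = URL, timeout: float = 10.0):
    """Fetch the quality record as parsed JSON (no API key needed
    for up to 100 requests/day)."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)
```

With a free key (1,000 requests/day), you would add it to the request per the service's docs; the authentication header name is not specified on this page.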