ml-research/localizing_memorization_in_diffusion_models

[NeurIPS 2024] Source code for our paper "Finding NeMo: Localizing Neurons Responsible For Memorization in Diffusion Models".

Quality score: 29 / 100 (Experimental)

This project helps AI safety researchers and machine learning engineers identify and address privacy concerns in diffusion models. It takes a trained diffusion model and a set of prompts as input. The output is a list of specific neurons responsible for memorizing individual training images, allowing you to deactivate them to prevent sensitive or copyrighted data from being reproduced. This tool is for those who develop, deploy, or audit large image generation models.
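
As a rough illustration of the deactivation step, here is a minimal PyTorch sketch that zeroes a few neurons in one cross-attention value projection of a Stable Diffusion UNet using a forward hook. It assumes the diffusers library; the layer path and neuron indices are illustrative placeholders, not the output of this repository's localization method.

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# Suppose the localization step flagged these output neurons of one
# cross-attention value projection (placeholder layer and indices).
layer = pipe.unet.down_blocks[0].attentions[0].transformer_blocks[0].attn2.to_v
memorization_neurons = [17, 342, 901]  # placeholder indices

def zero_neurons(module, inputs, output):
    # Zero the flagged output channels so their contribution cannot
    # propagate into the generated image.
    output[..., memorization_neurons] = 0.0
    return output

handle = layer.register_forward_hook(zero_neurons)
image = pipe("a photo of a cat").images[0]  # generation with neurons disabled
handle.remove()  # restore normal behavior

Deactivation via a hook is reversible: removing the hook restores the original model, so one checkpoint can serve both filtered and unfiltered generation.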

No commits in the last 6 months.

Use this if you need to pinpoint exactly which parts of a diffusion model cause it to reproduce training data, which improves privacy and reduces intellectual-property risk.

Not ideal if you are looking for a simple plug-and-play solution to prevent all memorization without understanding the underlying neural network components.

Tags: AI safety · diffusion models · data privacy · model interpretability · intellectual property
Stale (6 months) · No Package · No Dependents
Maintenance 2 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 6 / 25


Stars: 13
Forks: 1
Language: Python
License: MIT
Last pushed: Jul 18, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/ml-research/localizing_memorization_in_diffusion_models"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
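
For scripted access, the same endpoint can be queried from Python with only the standard library; the response schema is not documented on this page, so this sketch simply pretty-prints whatever JSON the API returns.

import json
import urllib.request

url = ("https://pt-edge.onrender.com/api/v1/quality/diffusion/"
       "ml-research/localizing_memorization_in_diffusion_models")
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)
print(json.dumps(data, indent=2))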