LetterLiGo/SafeGen_CCS2024

[CCS'24] SafeGen: Mitigating Unsafe Content Generation in Text-to-Image Models

41 / 100 (Emerging)

This project helps anyone working with Text-to-Image (T2I) AI models prevent them from generating sexually explicit content. You provide an existing T2I model and a dataset of paired 'nude' and 'mosaic' (censored) images, and it outputs a modified T2I model that is far less likely to produce unsafe images. It is aimed at AI developers, researchers, and product managers responsible for the ethical deployment of generative AI.

138 stars. No commits in the last 6 months.

Use this if you need to train a robust filter for your Text-to-Image model to significantly reduce the generation of sexually explicit content.

Not ideal if you're looking for a simple plug-and-play content moderation solution without needing to train or fine-tune an AI model yourself.

AI safety, content moderation, generative AI, responsible AI, image generation
Stale (6m), No Package, No Dependents
Maintenance 2 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 13 / 25


Stars: 138
Forks: 13
Language: Python
License: Apache-2.0
Last pushed: Jul 01, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/LetterLiGo/SafeGen_CCS2024"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
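The same endpoint can also be queried from Python. A minimal sketch using only the standard library; the shape of the JSON response is not documented here, so the example simply returns the parsed payload (and an empty dict if the network or endpoint is unavailable):

```python
import json
import urllib.request

# Public endpoint from the curl example above (no key needed, 100 requests/day).
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "generative-ai/LetterLiGo/SafeGen_CCS2024")

def fetch_quality(url: str = URL) -> dict:
    """Fetch the quality-score JSON for the repository.

    Returns an empty dict on network or HTTP errors, since this is a
    best-effort sketch rather than production client code.
    """
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)
    except (OSError, ValueError):
        return {}

if __name__ == "__main__":
    print(fetch_quality())
```

With a free API key, the rate limit rises to 1,000 requests/day; how the key is passed (header or query parameter) is not specified on this page.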