aimagelab/safe-clip
Safe-CLIP: Removing NSFW Concepts from Vision-and-Language Models. ECCV 2024
This project provides Safe-CLIP, a CLIP model fine-tuned to remove Not Safe For Work (NSFW) concepts from its shared vision-and-language embedding space. Used as a drop-in encoder for text-to-image generation, image captioning, or cross-modal retrieval, it steers outputs away from inappropriate content. It is aimed at anyone building or deploying visual AI systems where blocking NSFW content is critical.
No commits in the last 6 months.
Use this if you are building or using AI models that generate images from text or describe images with text, and you need to minimize the risk of explicit, harmful, or sensitive content in the outputs.
Not ideal if your application legitimately requires generating or processing explicit or sensitive content, or if you are working on non-visual AI tasks.
Stars: 67
Forks: —
Language: Python
License: —
Category: diffusion
Last pushed: Aug 10, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/aimagelab/safe-clip"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
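The endpoint above follows a `category/owner/repo` path layout, so the same data can be fetched programmatically. A minimal Python sketch, assuming only the URL pattern shown in the curl example (the response schema is not documented here, so the payload is returned as decoded JSON without further interpretation):

```python
import json
import urllib.request

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository endpoint from its three path segments."""
    return f"{BASE_URL}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body (schema undocumented here)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Equivalent of the curl example above (requires network access):
# data = fetch_quality("diffusion", "aimagelab", "safe-clip")
print(quality_url("diffusion", "aimagelab", "safe-clip"))
```

The category segment (`diffusion`) is taken from the curl example's URL; other categories would presumably follow the same pattern.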
Higher-rated alternatives
NVlabs/Sana
SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer
FoundationVision/VAR
[NeurIPS 2024 Best Paper Award][GPT beats diffusion🔥] [scaling laws in visual generation📈]...
nerdyrodent/VQGAN-CLIP
Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
huggingface/finetrainers
Scalable and memory-optimized training of diffusion models
AssemblyAI-Community/MinImagen
MinImagen: A minimal implementation of the Imagen text-to-image model