aimagelab/safe-clip

Safe-CLIP: Removing NSFW Concepts from Vision-and-Language Models. ECCV 2024

Score: 16 / 100 (Experimental)

This project provides Safe-CLIP, a vision-and-language model fine-tuned to remove Not Safe For Work (NSFW) concepts. It takes text descriptions or images as input and produces corresponding images or text, steering outputs away from inappropriate content. It is designed for anyone building or deploying visual AI systems where preventing NSFW content is critical.

No commits in the last 6 months.

Use this if you are building or using AI models that generate images from text or describe images with text, and you need the outputs kept free of explicit, harmful, or sensitive content.

Not ideal if your application requires the ability to generate or process explicit or sensitive content, or if you are working with non-visual AI tasks.

Tags: content moderation, ethical AI, image generation, AI safety, text-to-image
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 8 / 25
Community 0 / 25


Stars: 67
Forks:
Language: Python
License: none
Last pushed: Aug 10, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/aimagelab/safe-clip"

The endpoint is open to everyone (100 requests/day, no key needed); get a free key for 1,000 requests/day.
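For programmatic use, the curl command above can be reproduced with Python's standard library. A minimal sketch follows; the `quality_url` and `fetch_quality` helper names are illustrative, and the JSON response schema is not documented here, so inspect the returned dict before relying on any particular key.

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given registry/owner/repo."""
    return f"{API_BASE}/{registry}/{owner}/{repo}"

def fetch_quality(registry: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON response for one repository.

    The response fields are not documented in this listing, so callers
    should inspect the dict rather than assume specific keys.
    """
    with urllib.request.urlopen(quality_url(registry, owner, repo)) as resp:
        return json.load(resp)

# Example (requires network access):
# data = fetch_quality("diffusion", "aimagelab", "safe-clip")
```

Unauthenticated calls count against the 100 requests/day limit, so cache responses locally if you query many repositories.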