M1chlCZ/nsfw_sherlock

Simple drop-in API to determine if an image is NSFW, using TensorFlow

Score: 36 / 100 (Emerging)

This tool helps content moderators, platform administrators, and community managers automatically identify inappropriate images and text. You input an image, and it tells you if the picture or any text within it contains Not Safe For Work (NSFW) content. It can also provide specific labels like 'porn' or 'hentai' with confidence scores, allowing for fine-tuned content filtering.
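The per-label confidence scores described above lend themselves to threshold-based filtering. A minimal Go sketch of that idea follows; the `Label` struct, the label names, and the threshold logic are illustrative assumptions, not the tool's actual response schema:

```go
package main

import "fmt"

// Label is a hypothetical shape for one classification result;
// the actual fields returned by nsfw_sherlock may differ.
type Label struct {
	Name  string
	Score float64
}

// isNSFW flags content when any objectionable category meets the threshold.
func isNSFW(labels []Label, threshold float64) bool {
	flagged := map[string]bool{"porn": true, "hentai": true}
	for _, l := range labels {
		if flagged[l.Name] && l.Score >= threshold {
			return true
		}
	}
	return false
}

func main() {
	labels := []Label{{"neutral", 0.82}, {"porn", 0.12}}
	fmt.Println(isNSFW(labels, 0.5)) // prints false
}
```

Raising or lowering the threshold is how the "fine-tuned content filtering" mentioned above would typically be exposed to moderators.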

No commits in the last 6 months.

Use this if you need an automated way to screen user-submitted images or content for objectionable material before it goes live on your platform.

Not ideal if you require human-level nuanced understanding of context or highly subjective content moderation that goes beyond explicit imagery or text.

Tags: content moderation, community management, platform safety, image screening, user-generated content

Status: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 15 / 25


Stars: 13
Forks: 4
Language: Go
License: AGPL-3.0
Last pushed: Aug 22, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/M1chlCZ/nsfw_sherlock"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
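The endpoint returns JSON. A minimal Go sketch for decoding such a response follows; since the response schema is not documented on this page, it decodes into a generic map and uses an illustrative sample body shaped after the scores shown above rather than a live request:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// decodeScores parses a quality-API response body into a generic map,
// avoiding assumptions about the (undocumented) field names.
func decodeScores(body []byte) (map[string]any, error) {
	var data map[string]any
	err := json.Unmarshal(body, &data)
	return data, err
}

func main() {
	// A hypothetical response body; the real API's fields may differ.
	sample := []byte(`{"maintenance": 0, "adoption": 5, "maturity": 16, "community": 15}`)
	data, err := decodeScores(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(data["maturity"]) // prints 16
}
```

In a real client, the body would come from an `http.Get` on the URL above, with the same decode applied to `resp.Body`.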