glincker/glin-profanity

Open-source ML-powered profanity filter with TensorFlow.js toxicity detection, leetspeak & Unicode obfuscation resistance. 21M+ ops/sec, 23 languages, React hooks, LRU caching. npm & PyPI.

Score: 60 / 100 (Established)

This tool helps online community managers, content moderators, and customer support teams automatically detect and filter inappropriate language from user-generated content. You input text from comments, chat messages, or social media posts, and it tells you if profanity is present, even if users try to bypass filters with leetspeak or special characters. It can also replace or censor the detected words.
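The evasion resistance described above (leetspeak, special characters) boils down to normalizing text before the dictionary lookup. A minimal sketch of that idea follows; the substitution map, `bannedWords` list, and function names are illustrative, not the library's actual API or tables:

```typescript
// Sketch: normalize leetspeak/Unicode obfuscation, then check a word list.
// LEET_MAP and bannedWords are placeholder examples, not glin-profanity's data.
const LEET_MAP: Record<string, string> = {
  "0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t",
  "@": "a", "$": "s",
};

function normalize(text: string): string {
  return text
    .normalize("NFKD")                      // fold Unicode lookalikes (e.g. fullwidth forms)
    .replace(/[\u200B-\u200D\uFEFF]/g, "")  // strip zero-width characters
    .toLowerCase()
    .split("")
    .map((ch) => LEET_MAP[ch] ?? ch)        // undo common leetspeak substitutions
    .join("");
}

const bannedWords = new Set(["badword"]);   // placeholder entry

function isProfane(text: string): boolean {
  // Split the normalized text into letter runs and test each against the list.
  return normalize(text)
    .split(/[^a-z]+/)
    .some((word) => bannedWords.has(word));
}
```

With this approach, an evasive spelling like `b4dw0rd` normalizes to `badword` and is caught by the same list lookup a naive filter would miss.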

Available on PyPI and npm.

Use this if you need a robust, multilingual solution to keep your online platforms clean from offensive language and toxicity, especially if users frequently attempt to evade detection.

Not ideal if you only need a very basic profanity check against a simple word list without any advanced detection for evasive tactics or multiple languages.

Tags: content-moderation, online-community-management, social-media-management, customer-service, brand-reputation
Maintenance: 10 / 25
Adoption: 8 / 25
Maturity: 25 / 25
Community: 17 / 25


Stars: 44
Forks: 9
Language: TypeScript
License: MIT
Last pushed: Mar 10, 2026
Commits (30d): 0
Dependencies: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/glincker/glin-profanity"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
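The curl command above can also be called from Node 18+ (which ships a global `fetch`). A minimal sketch follows; the response schema is not documented here, so the result is treated as opaque JSON:

```typescript
// Sketch: fetch the quality data for a repo from the API shown above.
// Only the URL pattern comes from the listing; the response shape is unknown.
const API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools";

function qualityUrl(owner: string, repo: string): string {
  return `${API_BASE}/${owner}/${repo}`;
}

async function fetchQuality(owner: string, repo: string): Promise<unknown> {
  const res = await fetch(qualityUrl(owner, repo));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json(); // opaque JSON payload; inspect before relying on fields
}

// Example: fetchQuality("glincker", "glin-profanity").then(console.log);
```

Without an API key this stays within the 100 requests/day anonymous limit, so cache results rather than calling per page view.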