minhd-vu/toxicity-filter
Natural language processing API to detect toxic chat.
This project helps online communities and gaming platforms automatically detect and flag toxic chat messages in real time. It takes in chat messages as input and provides a toxicity score between 0 and 1, allowing platforms to filter or censor content. It's designed for community managers, platform administrators, and game developers who want to foster safer online environments.
No commits in the last 6 months.
Use this if you need an API to assess the toxicity of user-generated text, like chat messages or comments, to help moderate your online community.
Not ideal if you need a pre-built, production-ready system that censors specific words in place, or a moderation bot that is already deployed and ready to use.
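The scoring scheme described above can be sketched as a simple threshold filter. This is a minimal illustration, not the project's actual code: the function name and the 0.8 threshold are assumptions, and `score` stands in for the value the API would return for a message.

```python
def moderate(message: str, score: float, threshold: float = 0.8) -> str:
    """Decide what to do with a chat message given its toxicity score.

    `score` is assumed to be the 0-1 value returned by a toxicity API;
    the 0.8 threshold is an illustrative choice, not a project default.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("toxicity score must be in [0, 1]")
    if score >= threshold:
        return "flagged"  # hide the message or queue it for human review
    return "allowed"

print(moderate("gg wp", 0.05))  # low score: allowed
print(moderate("example toxic text", 0.93))  # high score: flagged
```

In practice the threshold is a policy decision: stricter communities lower it and accept more false positives, while others flag only high-confidence cases.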
Stars: 13
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Dec 10, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/minhd-vu/toxicity-filter"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
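If you would rather query the endpoint from Python than curl, a sketch using only the standard library is below. The URL is taken from the curl example above; the shape of the JSON response is not documented here, so the code just parses whatever JSON comes back.

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"

def endpoint_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL used by the curl example."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record as parsed JSON (requires network access)."""
    with urllib.request.urlopen(endpoint_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(endpoint_url("minhd-vu", "toxicity-filter"))
```

Without an API key this shares the same 100 requests/day limit as the curl call, so cache responses rather than fetching on every page load.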
Higher-rated alternatives
unitaryai/detoxify: Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built...
kensk8er/chicksexer: A Python package for gender classification.
Infinitode/ValX: ValX is an open-source Python package for text cleaning tasks, including profanity detection and...
PavelOstyakov/toxic: Toxic Comment Classification Challenge
minerva-ml/open-solution-toxic-comments: Open solution to the Toxic Comment Classification Challenge