minhd-vu/toxicity-filter

Natural language processing API to detect toxic chat.

27 / 100 · Experimental

This project helps online communities and gaming platforms automatically detect and flag toxic chat messages in real time. It takes chat messages as input and returns a toxicity score between 0 and 1, allowing platforms to filter or censor content. It's designed for community managers, platform administrators, and game developers who want to foster safer online environments.
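A moderation hook built on a 0-to-1 toxicity score typically compares it against a configurable threshold. A minimal sketch of that pattern — the function name, threshold value, and redaction text are illustrative, not this project's actual API:

```python
def moderate(message: str, score: float, threshold: float = 0.8) -> str:
    """Pass the message through if its toxicity score is below the
    threshold; otherwise replace it with a redaction placeholder."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("toxicity score must be between 0 and 1")
    return message if score < threshold else "[message removed]"
```

Lowering the threshold makes moderation stricter; the right value depends on how the platform weighs false positives against missed toxic messages.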

No commits in the last 6 months.

Use this if you need an API to assess the toxicity of user-generated text, like chat messages or comments, to help moderate your online community.

Not ideal if you need a pre-built, production-ready system that censors specific words within a sentence, or one that is already deployed as a bot.

online-moderation community-management chat-filtering gaming-platforms content-safety
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 6 / 25
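The overall score appears to be the sum of the four category scores above, each out of 25 for a total out of 100 — a quick check against the figures shown on this page:

```python
# Category scores as listed above (each out of 25).
categories = {"Maintenance": 0, "Adoption": 5, "Maturity": 16, "Community": 6}

# Summing them reproduces the overall 27 / 100 shown at the top.
overall = sum(categories.values())
print(overall)  # 27
```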


Stars: 13
Forks: 1
Language: Python
License: MIT
Last pushed: Dec 10, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/minhd-vu/toxicity-filter"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
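The same endpoint shown in the curl command can be called from Python. A minimal sketch (the response schema is not documented on this page, so the request line is left commented out and only the URL construction is shown):

```python
import json
import urllib.request

# Public endpoint from this listing page.
BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-data API URL for a given repository."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("minhd-vu", "toxicity-filter")
# Unauthenticated access allows 100 requests/day; response fields
# (score, stars, etc.) are assumptions based on the stats above.
# data = json.loads(urllib.request.urlopen(url).read())
```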