charliegerard/safe-space
GitHub Action that checks the toxicity level of comments and PR reviews to help make repos safe spaces.
This GitHub Action automatically checks comments and pull request reviews for potentially toxic language. It takes in newly submitted text on issues and pull requests, analyzes it for toxicity, and outputs a flag or comment if the content exceeds a certain threshold. Developers and open-source project maintainers would use this to foster a more respectful and inclusive environment in their repositories.
472 stars. No commits in the last 6 months.
Use this if you want to automatically identify and flag potentially harmful comments in your GitHub repository to encourage healthier discussions.
Not ideal if you need a real-time content moderation solution, as there can be a delay of up to 40 seconds before results are displayed.
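To enable it, you add the action to a workflow in your repository. The sketch below shows a minimal setup; the trigger events and input names are assumptions based on common GitHub Actions conventions rather than this project's documented configuration, so check the repo's README for the exact inputs.

# Hypothetical workflow sketch; event names and inputs are assumptions.
name: Toxicity check

on: [issue_comment, pull_request_review_comment]

jobs:
  toxic-check:
    runs-on: ubuntu-latest
    steps:
      # The token lets the action read the new comment and reply
      # when the toxicity score crosses the threshold.
      - uses: charliegerard/safe-space@master
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}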
Stars: 472
Forks: 11
Language: JavaScript
License: GPL-3.0
Category: ml-frameworks
Last pushed: Jun 24, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/charliegerard/safe-space"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
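To call the endpoint from code instead of curl, a minimal JavaScript sketch (Node.js 18+, which provides a global fetch, run as an ES module) could look like the following; the shape of the returned JSON is not documented here, so the response is logged rather than destructured.

// Fetch the quality data for charliegerard/safe-space.
const url =
  "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/charliegerard/safe-space";

const res = await fetch(url);
if (!res.ok) throw new Error(`Request failed with status ${res.status}`);

// Log the raw JSON; the schema is an assumption left uninspected here.
const data = await res.json();
console.log(data);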
Higher-rated alternatives
zake7749/DeepToxic
Top 1% solution to the toxic comment classification challenge on Kaggle.
DenisIndenbom/AntiToxicBot
AntiToxicBot is a bot that detects toxic users in a chat using data science and machine learning...
aralroca/react-text-toxicity
Detect text toxicity in a simple way, using React. Based on a Keras model, loaded with TensorFlow.js.
bensonruan/Toxic-Comment-Classifier
kaelyx-dev/BlacklistedWordsBot
BWB is a Discord bot that auto-moderates messages against a word blacklist, deleting...