charliegerard/safe-space

GitHub Action that checks the toxicity level of comments and PR reviews to help make repos safe spaces.

Score: 34 / 100 (Emerging)

This GitHub Action automatically checks comments and pull request reviews for potentially toxic language. It takes in newly submitted text on issues and pull requests, analyzes it for toxicity, and outputs a flag or comment if the content exceeds a certain threshold. Developers and open-source project maintainers would use this to foster a more respectful and inclusive environment in their repositories.
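The behavior described above maps to an ordinary GitHub Actions workflow that triggers on comment and review events. A minimal sketch, assuming the action is consumed in the usual way (the `GITHUB_TOKEN` input and the `@main` ref reflect common usage and may differ from the repo's current README):

```yaml
# .github/workflows/safe-space.yml
# Run the toxicity check whenever a comment or review is submitted.
name: Safe space
on: [issue_comment, pull_request_review, pull_request_review_comment]

jobs:
  toxicity-check:
    runs-on: ubuntu-latest
    steps:
      # The GITHUB_TOKEN input lets the action read the submitted text
      # and post a comment if it exceeds the toxicity threshold.
      - uses: charliegerard/safe-space@main
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

Check the repository's README for the exact input names and any configurable threshold before relying on this wiring.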

472 stars. No commits in the last 6 months.

Use this if you want to automatically identify and flag potentially harmful comments in your GitHub repository to encourage healthier discussions.

Not ideal if you need a real-time content moderation solution, as there can be a delay of up to 40 seconds before results are displayed.

Tags: community-moderation, developer-relations, open-source-management, code-review, team-communication
Status: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 8 / 25


Stars: 472
Forks: 11
Language: JavaScript
License: GPL-3.0
Last pushed: Jun 24, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/charliegerard/safe-space"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.