ogencoglu/fair_cyberbullying_detection

Source code and models for the paper "Cyberbullying Detection with Fairness Constraints" (IEEE Internet Computing, 2020).

Quality score: 37 / 100 (Emerging)

This project helps social media platforms and online communities detect cyberbullying in user-generated text while keeping the detection models fair across different identity groups. You provide text data, optionally annotated with identity attributes, and the system outputs cyberbullying classifications trained to minimize bias against those groups. Typical users are community moderators, platform safety teams, and trust & safety data scientists.
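
The fairness goal described above is usually checked by comparing error rates across identity groups. The sketch below is not code from this repository; the group labels, data, and helper function are illustrative assumptions showing one common check, the false positive rate gap between groups.

# Illustrative sketch only -- not code from this repository.
# Computes the false positive rate (FPR) per identity group and the gap
# between groups, a common fairness check for cyberbullying classifiers.

def false_positive_rate(labels, predictions):
    # FPR = flagged examples among all truly non-bullying examples
    negatives = [p for y, p in zip(labels, predictions) if y == 0]
    return sum(negatives) / len(negatives) if negatives else 0.0

# Hypothetical evaluation data: true labels, model predictions, identity group.
labels      = [0, 0, 1, 0, 1, 0, 0, 1]
predictions = [1, 0, 1, 0, 1, 1, 0, 1]
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]

fpr_by_group = {}
for g in set(groups):
    idx = [i for i, grp in enumerate(groups) if grp == g]
    fpr_by_group[g] = false_positive_rate(
        [labels[i] for i in idx], [predictions[i] for i in idx]
    )

gap = max(fpr_by_group.values()) - min(fpr_by_group.values())
print(fpr_by_group, "FPR gap:", gap)

A large gap means the model falsely flags one identity group more often than another, which is the kind of disparity the paper's fairness constraints aim to reduce.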

No commits in the last 6 months.

Use this if you need to build or evaluate cyberbullying detection systems that are specifically designed to be fair and unbiased towards different user groups.

Not ideal if you are looking for an off-the-shelf, plug-and-play cyberbullying detection API without needing to understand or customize the underlying fairness mechanisms.

content-moderation online-safety social-media-analysis hate-speech-detection algorithmic-fairness
Status: Stale (6 months) · No package · No dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 15 / 25


Stars: 19
Forks: 5
Language: Jupyter Notebook
License: MIT
Last pushed: Mar 25, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/ogencoglu/fair_cyberbullying_detection"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
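
For programmatic access, the same endpoint can be queried from Python with the standard library. This is a sketch assuming the endpoint returns JSON; no specific response fields are assumed.

# Sketch: fetch the quality data for this repository from the public API.
# Assumes the endpoint returns JSON; no particular response schema is assumed.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/ogencoglu/fair_cyberbullying_detection"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))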