SamaKhan35/Automated-Content-Moderation

Automated Content Moderation System: a machine learning project that classifies textual content as appropriate or inappropriate. The project includes data preprocessing, LSTM and BERT model training and evaluation, and a Flask web application for real-time content classification. Developed as part of university research.
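The description above mentions a data-preprocessing step feeding the LSTM and BERT models. The repository's actual pipeline is not shown here; the following is a minimal sketch of the kind of text normalisation such a pipeline typically performs (lowercasing, URL stripping, tokenisation). The function name and regex rules are illustrative assumptions, not the project's code.

```python
import re

def preprocess(text: str) -> list[str]:
    """Illustrative cleaning step before model input (assumed, not the repo's code)."""
    text = text.lower()                       # normalise case
    text = re.sub(r"https?://\S+", " ", text) # drop URLs
    text = re.sub(r"[^a-z0-9\s]", " ", text)  # drop punctuation/symbols
    return text.split()                       # whitespace tokenisation

print(preprocess("Check THIS out! https://example.com/abc"))
```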

Quality score: 27 / 100 (Experimental)

This project helps online communities and content platforms automatically identify and filter harmful text and images. You provide it with text, or upload an image containing text, and it tells you if the content is appropriate or inappropriate for public display. This is designed for community managers, social media platforms, and website administrators who need to maintain a safe and respectful environment.
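The text-in, label-out flow described above maps naturally onto a small Flask endpoint. As a hedged sketch only: the route name, JSON shape, and the keyword-based stand-in classifier below are assumptions; in the actual project, inference would be served by the trained LSTM or BERT model.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder blocklist standing in for the trained LSTM/BERT model (assumption).
BLOCKLIST = {"hate", "abuse"}

def classify(text: str) -> str:
    """Stand-in for model inference; the real project would run the trained model here."""
    tokens = set(text.lower().split())
    return "inappropriate" if tokens & BLOCKLIST else "appropriate"

@app.route("/moderate", methods=["POST"])
def moderate():
    # Expect a JSON body like {"text": "..."} and return a JSON label.
    text = request.get_json(force=True).get("text", "")
    return jsonify({"label": classify(text)})
```

A client would POST text and read back `{"label": "appropriate"}` or `{"label": "inappropriate"}`.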

No commits in the last 6 months.

Use this if you manage user-generated content and need an automated way to screen for offensive or inappropriate material before it goes live.

Not ideal if you require moderation of very nuanced content, audio, or video, as this system focuses solely on text, whether entered directly or extracted from images.

Tags: content moderation, online community management, brand safety, user-generated content, platform administration
Badges: No License, Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 4 / 25
Maturity: 8 / 25
Community: 15 / 25


Stars: 8
Forks: 4
Language: Jupyter Notebook
License: none
Last pushed: Apr 22, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/SamaKhan35/Automated-Content-Moderation"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.