Raibows/RMLM

RMLM: A Flexible Defense Framework for Proactively Mitigating Word-level Adversarial Attacks, ACL 2023.

Overall score: 13 / 100 (Experimental)

This project helps machine learning engineers and NLP researchers defend their natural language processing models against "adversarial attacks." These attacks subtly change a few words in an input text to trick a model into making wrong predictions. This tool takes your existing text classification or sentiment analysis model and your dataset, then trains a defensive layer to make your model more robust and reliable against such manipulative inputs.

No commits in the last 6 months.

Use this if you are developing or deploying NLP models and are concerned about their vulnerability to word-level adversarial attacks.

Not ideal if you are looking for defenses against non-text-based adversarial attacks or concept drift, or if you need a plug-and-play solution without fine-tuning.

natural-language-processing machine-learning-security text-classification adversarial-robustness sentiment-analysis
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 0 / 25


Stars: 9
Forks:
Language: Python
License: None
Last pushed: Dec 03, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Raibows/RMLM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
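The same endpoint can also be queried from Python. Below is a minimal sketch using only the standard library; the response schema is not documented here, so the example returns the parsed JSON as-is rather than assuming specific fields (`build_url` and `fetch_report` are illustrative helper names, not part of the API):

```python
import json
import urllib.request

# Base endpoint, as shown in the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(category: str, repo: str) -> str:
    """Construct the quality-report URL, e.g. for ('nlp', 'Raibows/RMLM')."""
    return f"{BASE}/{category}/{repo}"


def fetch_report(category: str, repo: str) -> dict:
    """Fetch a quality report as parsed JSON (no key needed: 100 requests/day)."""
    with urllib.request.urlopen(build_url(category, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_report("nlp", "Raibows/RMLM")` requests the same URL as the curl command; inspect the returned JSON before relying on any particular field.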