thunlp/OpenAttack

An Open-Source Package for Textual Adversarial Attack.

Score: 58/100 (Established)

This tool helps machine learning engineers and researchers assess the weaknesses of natural language processing (NLP) models. You provide an NLP model (like a sentiment analyzer) and text data, and it generates 'adversarial examples' — slightly altered texts designed to trick the model — along with evaluation metrics. This is useful for anyone building or evaluating text-based AI systems who needs to understand their model's vulnerabilities.
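
As a sketch of the typical workflow, the snippet below follows the pattern in OpenAttack's own quickstart: load a built-in victim model, pick a bundled attacker, and run the evaluation. Exact names (loadVictim, PWWSAttacker, AttackEval) may differ between releases, and the Hugging Face datasets package is assumed as a dependency.

import OpenAttack as oa
import datasets

def dataset_mapping(x):
    # OpenAttack expects examples shaped as {"x": text, "y": label}.
    return {"x": x["sentence"], "y": 1 if x["label"] > 0.5 else 0}

# Built-in victim: a BERT sentiment classifier fine-tuned on SST.
victim = oa.loadVictim("BERT.SST")

# Twenty SST sentences keep the demo fast.
dataset = datasets.load_dataset("sst", split="train[:20]").map(dataset_mapping)

# PWWS: one of the word-substitution attackers shipped with the package.
attacker = oa.attackers.PWWSAttacker()

# Run the attack; visualize=True prints per-example results and summary metrics.
attack_eval = oa.AttackEval(attacker, victim)
attack_eval.eval(dataset, visualize=True)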

772 stars. No commits in the last 6 months. Available on PyPI.

Use this if you need to test how robust your NLP model is against subtle text changes, develop new adversarial attack methods, or enhance your model's resistance through adversarial training.

Not ideal if you are looking for a general-purpose NLP library for tasks like text classification or named entity recognition, or if you are not working with adversarial attacks or model robustness.
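
To make "subtle text changes" concrete, here is a toy greedy synonym-substitution loop in the spirit of word-level attacks such as PWWS. It is purely illustrative, not OpenAttack's implementation; SYNONYMS, predict, and greedy_word_swap are hypothetical names.

from typing import Callable, Dict, List

# Toy synonym table; a real attack would draw candidates from WordNet
# or word embeddings instead.
SYNONYMS: Dict[str, List[str]] = {
    "great": ["fine", "decent"],
    "terrible": ["poor", "bad"],
}

def greedy_word_swap(text: str, predict: Callable[[str], float]) -> str:
    """Greedily swap words for synonyms while the victim model's confidence
    in its original prediction drops. `predict` returns the probability the
    model assigns to the originally predicted class."""
    words = text.split()
    best_score = predict(text)
    for i, word in enumerate(words):
        for synonym in SYNONYMS.get(word.lower(), []):
            candidate = words[:i] + [synonym] + words[i + 1:]
            score = predict(" ".join(candidate))
            if score < best_score:  # the swap hurt the model: keep it
                words, best_score = candidate, score
    return " ".join(words)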

Tags: NLP-model-evaluation, textual-robustness, adversarial-testing, ML-security, text-AI-development
Score breakdown:
Maintenance: 0/25 (stale: no commits in 6 months)
Adoption: 10/25
Maturity: 25/25
Community: 23/25


Stars: 772
Forks: 130
Language: Python
License: MIT
Last pushed: Jul 20, 2023
Commits (30d): 0
Dependencies: 6

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/thunlp/OpenAttack"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
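
The same data can be fetched from Python, for example with the requests library. A minimal sketch; the response schema is not documented in this listing, so the JSON is printed as-is.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/thunlp/OpenAttack"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surfaces HTTP errors, e.g. hitting the 100 requests/day anonymous limit
print(resp.json())       # schema not documented here, so print the raw JSON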