yul091/DGSlow

Codebase for the ACL 2023 paper: White-Box Multi-Objective Adversarial Attack on Dialogue Generation.

Score: 14 / 100 (Experimental)

This tool is for researchers and developers building or evaluating conversational AI systems. It tests the robustness of dialogue generation models by generating adversarial examples that expose their weaknesses: given a pre-trained dialogue model and a dataset, it produces perturbed inputs that trick the model into generating undesirable responses.

No commits in the last 6 months.

Use this if you need to rigorously test the resilience and safety of your dialogue generation models against various types of malicious inputs.

Not ideal if you are looking to improve the performance or accuracy of your dialogue model, as this tool focuses on stress-testing its vulnerabilities.

conversational-ai dialogue-systems model-robustness ai-safety natural-language-generation
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 0 / 25


Stars: 16
Forks:
Language: Python
License: none
Last pushed: Dec 08, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/yul091/DGSlow"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
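The same endpoint can be queried from Python using only the standard library. A minimal sketch; the helper names are hypothetical, and the response is assumed to be a JSON object (its exact fields are not documented here):

```python
import json
import urllib.request

# Base URL taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"


def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality report (assumes a JSON body)."""
    url = build_quality_url(ecosystem, owner, repo)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)
```

For example, `fetch_quality("nlp", "yul091", "DGSlow")` requests the same URL as the curl command shown above; at the free tier no API key header is needed.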