sigeisler/reinforce-attacks-llms

REINFORCE Adversarial Attacks on Large Language Models: An Adaptive, Distributional, and Semantic Objective

Score: 36 / 100 (Emerging)

This project helps AI safety researchers and red teamers evaluate how vulnerable large language models (LLMs) are to adversarial attacks. It takes a set of instructions or prompts designed to elicit harmful responses from an LLM and outputs metrics showing how successful different attack methods are at making the model generate undesirable content.
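
As a hypothetical illustration of what such a metric can look like, here is a minimal Python sketch of an attack success rate computed over model responses. The refusal-marker heuristic and the function name are invented for this example and are not the repository's actual API.

REFUSAL_MARKERS = ("I can't", "I cannot", "I'm sorry")  # crude refusal heuristic, for illustration only

def attack_success_rate(responses):
    """Fraction of model responses that do not look like refusals."""
    if not responses:
        return 0.0
    successes = sum(
        1 for r in responses
        if not any(marker in r for marker in REFUSAL_MARKERS)
    )
    return successes / len(responses)

# Two of these three responses comply, so the rate is ~0.67.
print(attack_success_rate(["Sure, here is...", "I'm sorry, but...", "Step 1: ..."]))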

No commits in the last 6 months.

Use this if you are an AI safety researcher or red teamer looking to test the robustness and refusal capabilities of LLMs against advanced adversarial prompting techniques.

Not ideal if you are an end-user of an LLM simply trying to get it to respond to a prompt, or if you are a developer integrating LLMs into an application.

Tags: AI safety, red teaming, LLM security, adversarial AI, model evaluation
Badges: Stale (6m), No Package, No Dependents

Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 14 / 25

Stars: 23
Forks: 4
Language: Python
License: MIT
Last pushed: Feb 28, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/sigeisler/reinforce-attacks-llms"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
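
For scripted access, here is a minimal Python sketch using only the standard library; it fetches the endpoint shown in the curl command above and pretty-prints the JSON response, since the payload schema is not documented on this page.

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/sigeisler/reinforce-attacks-llms")

# No API key is needed at the free tier (100 requests/day).
with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# The schema is undocumented here, so just inspect the raw payload.
print(json.dumps(data, indent=2))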