sigeisler/reinforce-attacks-llms
REINFORCE Adversarial Attacks on Large Language Models: An Adaptive, Distributional, and Semantic Objective
This project helps AI safety researchers and red teamers evaluate how vulnerable large language models (LLMs) are to adversarial attacks. It takes a set of instructions or prompts designed to elicit harmful responses from an LLM and outputs metrics showing how successful different attack methods are at making the model generate undesirable content.
No commits in the last 6 months.
Use this if you are an AI safety researcher or red teamer looking to test the robustness and refusal capabilities of LLMs against advanced adversarial prompting techniques.
Not ideal if you are an end-user of an LLM simply trying to get it to respond to a prompt, or if you are a developer integrating LLMs into an application.
Stars: 23
Forks: 4
Language: Python
License: MIT
Category:
Last pushed: Feb 28, 2025
Commits (last 30 days): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/sigeisler/reinforce-attacks-llms"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
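If you prefer Python over curl, here is a minimal sketch of the same request. It assumes the endpoint returns JSON and uses the third-party requests library; adjust the response handling if the API returns a different format.

import requests

# Hypothetical usage example: fetch the quality data for this repository.
# The URL is the same endpoint shown in the curl command above.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/sigeisler/reinforce-attacks-llms"

response = requests.get(url, timeout=30)
response.raise_for_status()  # raise an error on non-2xx status codes
print(response.json())       # assumes a JSON body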
Higher-rated alternatives
UCSB-NLP-Chang/SemanticSmooth
Implementation of paper 'Defending Large Language Models against Jailbreak Attacks via Semantic...
DAMO-NLP-SG/multilingual-safety-for-LLMs
[ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models"
yueliu1999/FlipAttack
[ICML 2025] An official source code for paper "FlipAttack: Jailbreak LLMs via Flipping".
vicgalle/merging-self-critique-jailbreaks
"Merging Improves Self-Critique Against Jailbreak Attacks", code and models
wanglne/DELMAN
[ACL 2025 Findings] DELMAN: Dynamic Defense Against Large Language Model Jailbreaking with Model Editing