boyiwei/alignment-attribution-code

[ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications

Score: 44 / 100 (Emerging)

This tool helps AI safety researchers and model developers evaluate the robustness of safety features in large language models like Llama 2. It takes a pre-trained, safety-aligned LLM and a dataset of safety-critical prompts as input. The output helps users understand how easily a model's safety alignment can be degraded by making small, targeted modifications to its internal structure.
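To make the kind of modification being evaluated concrete, here is a minimal, hypothetical sketch of a targeted low-rank edit: projecting out the top-k singular directions of a single weight matrix. The function name and the parameter k are illustrative assumptions for this page, not the repository's actual API.

# Hypothetical illustration (not this repo's API): zero out the k largest
# singular components of a weight matrix, the kind of small, targeted
# low-rank modification whose effect on safety alignment this tool measures.
import torch

def remove_top_ranks(weight: torch.Tensor, k: int) -> torch.Tensor:
    """Return a copy of `weight` with its k largest singular components removed."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    S = S.clone()
    S[:k] = 0.0  # drop the k most dominant rank-1 components
    return U @ torch.diag(S) @ Vh

A researcher would apply an edit like this to selected projection matrices of a safety-aligned model, then re-run safety and utility benchmarks to see how much alignment degrades.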

No commits in the last 6 months.

Use this if you need to rigorously test and understand the brittleness of safety alignment in your large language models, specifically by analyzing the impact of pruning or low-rank modifications on their safety performance and general utility.

Not ideal if you are looking for a general-purpose pruning tool aimed at optimizing inference speed or reducing model size rather than at evaluating safety alignment.

Tags: AI Safety, Large Language Models, Model Evaluation, Alignment Research, Interpretability
Flags: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 19 / 25


Stars: 89
Forks: 17
Language: Python
License: MIT
Last pushed: Mar 30, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/boyiwei/alignment-attribution-code"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
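For scripted access, the curl call above translates directly to Python. A minimal sketch, assuming the endpoint returns a JSON body on the free, no-key tier:

# Fetch the quality data for this repo; response fields are assumed JSON.
import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/boyiwei/alignment-attribution-code")
resp = requests.get(url, timeout=10)  # free tier: 100 requests/day, no key
resp.raise_for_status()
print(resp.json())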