rishub-tamirisa/tamper-resistance

[ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs"

Quality score: 39 / 100 (Emerging)

This project helps AI developers and researchers protect open-weight large language models (LLMs) from malicious alterations. It takes an LLM and training data as input and produces a more secure version of the LLM that is resistant to fine-tuning attacks designed to make the model perform harmful tasks, while retaining its original capabilities. This is for AI security engineers, responsible AI researchers, or LLM developers concerned with model safety.

No commits in the last 6 months.

Use this if you are developing or deploying open-weight LLMs and need to ensure they cannot be easily tampered with or fine-tuned for unsafe behavior by adversaries.

Not ideal if you are looking for safeguards against input-based attacks or if you do not have access to the full model weights for training.

Tags: AI safety, LLM security, model hardening, responsible AI, AI red-teaming

Status: Stale (6 months) · No package · No dependents

Score breakdown:
- Maintenance: 2 / 25
- Adoption: 8 / 25
- Maturity: 16 / 25
- Community: 13 / 25


Stars: 67
Forks: 8
Language: Python
License: MIT
Last pushed: Jun 09, 2025
Commits (last 30 days): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/rishub-tamirisa/tamper-resistance"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
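For programmatic use, the curl call above can be wrapped in a small helper. This is a minimal sketch: the URL path is taken directly from the example, but the structure of the JSON response is not documented here and is an assumption.

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(registry: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{API_BASE}/{registry}/{owner}/{repo}"


def fetch_quality(registry: str, owner: str, repo: str) -> dict:
    """Fetch quality data as JSON.

    NOTE: the response schema is undocumented here; callers should
    inspect the returned dict rather than rely on specific keys.
    """
    with urllib.request.urlopen(quality_url(registry, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Reproduces the URL from the curl example.
    print(quality_url("transformers", "rishub-tamirisa", "tamper-resistance"))
```

With no API key, this stays within the 100 requests/day anonymous limit; how a key would be passed (header vs. query parameter) is not specified on this page.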