rishub-tamirisa/tamper-resistance
[ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs"
This project helps AI developers and researchers protect open-weight large language models (LLMs) from malicious modification. Given an LLM and training data, it produces a hardened version of the model that resists fine-tuning attacks aimed at eliciting harmful behavior, while retaining the model's original capabilities. It is aimed at AI security engineers, responsible AI researchers, and LLM developers concerned with model safety.
No commits in the last 6 months.
Use this if you are developing or deploying open-weight LLMs and need to ensure they cannot be easily tampered with or fine-tuned for unsafe behavior by adversaries.
Not ideal if you are looking for safeguards against input-based attacks or if you do not have access to the full model weights for training.
Stars: 67
Forks: 8
Language: Python
License: MIT
Category:
Last pushed: Jun 09, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/rishub-tamirisa/tamper-resistance"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
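For programmatic access, a minimal Python sketch along the lines below should work. The endpoint URL is taken from the curl example above; the response fields and the X-API-Key header name are assumptions, not a documented schema.

import requests

# Endpoint copied from the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/rishub-tamirisa/tamper-resistance"

def fetch_quality(api_key=None):
    # An API key is optional (100 requests/day without one, 1,000/day with a free key).
    # The "X-API-Key" header name is an assumption.
    headers = {"X-API-Key": api_key} if api_key else {}
    resp = requests.get(URL, headers=headers, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    data = fetch_quality()
    print(data)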
Higher-rated alternatives
HowieHwong/TrustLLM
[ICML 2024] TrustLLM: Trustworthiness in Large Language Models
Intelligent-CAT-Lab/PLTranslationEmpirical
Artifact repository for the paper "Lost in Translation: A Study of Bugs Introduced by Large...
tsinghua-fib-lab/ANeurIPS2024_SPV-MIA
[NeurIPS'24] "Membership Inference Attacks against Fine-tuned Large Language Models via...
FudanDISC/ReForm-Eval
A benchmark for evaluating the capabilities of large vision-language models (LVLMs)
codessian/epistemic-confidence-layer
Model-agnostic trust protocol for calibrated, auditable AI