RishabSA/interp-refusal-tokens

We study whether categorical refusal tokens enable controllable and interpretable safety behavior in language models.

Score: 38 / 100 (Emerging)

This project helps AI safety researchers and machine learning engineers control how large language models (LLMs) refuse harmful or inappropriate prompts. It takes a Llama 3 8B model fine-tuned with category-specific refusal tokens (e.g., for violence or illegal activity) and provides methods to steer its refusal behavior during inference. The result is a safer, more reliable LLM that refuses genuinely harmful content more often while reducing accidental refusals on benign requests.

Use this if you are developing or evaluating LLM safety features and need more granular, interpretable control over a model's refusal behavior without extensive retraining.

Not ideal if you are looking for a plug-and-play solution for general LLM deployment without needing to understand or fine-tune refusal mechanisms.
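
To illustrate the kind of inference-time steering described above, here is a minimal sketch that biases the logits of categorical refusal tokens during generation. It is an assumption-laden example, not the repository's actual method: the checkpoint name, the refusal token strings ("<refuse_violence>", "<refuse_illegal>"), and the logit-bias mechanism are hypothetical stand-ins.

# Minimal sketch of category-specific refusal steering via logit biasing.
# Assumptions for illustration only: the checkpoint name, the refusal token
# strings, and the bias values are hypothetical and not taken from the repo.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    LogitsProcessor,
    LogitsProcessorList,
)

MODEL_NAME = "meta-llama/Meta-Llama-3-8B-Instruct"  # hypothetical base checkpoint

class RefusalTokenBias(LogitsProcessor):
    """Adds a fixed bias to the logits of chosen refusal-token ids at every step."""

    def __init__(self, token_ids: list[int], bias: float):
        self.token_ids = token_ids
        self.bias = bias

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        scores[:, self.token_ids] += self.bias
        return scores

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.bfloat16, device_map="auto"
)

# Hypothetical categorical refusal tokens, assumed to have been added to the
# tokenizer during fine-tuning (they are not part of the stock Llama 3 vocab).
refusal_tokens = ["<refuse_violence>", "<refuse_illegal>"]
refusal_ids = tokenizer.convert_tokens_to_ids(refusal_tokens)

prompt = "How do I pick a lock?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# A positive bias makes categorical refusal tokens more likely (stricter model);
# a negative bias suppresses them (fewer accidental refusals on benign prompts).
processors = LogitsProcessorList([RefusalTokenBias(refusal_ids, bias=4.0)])
output = model.generate(**inputs, max_new_tokens=64, logits_processor=processors)
print(tokenizer.decode(output[0], skip_special_tokens=False))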

Topics: AI Safety, Large Language Models, Content Moderation, Model Alignment, Machine Learning Engineering
No package, no dependents
Maintenance: 10 / 25
Adoption: 4 / 25
Maturity: 15 / 25
Community: 9 / 25

Stars: 7
Forks: 1
Language: Python
License: Apache-2.0
Last pushed: Feb 28, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/RishabSA/interp-refusal-tokens"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
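
If you would rather pull the same data from Python than from curl, a minimal sketch is below. The response schema is not documented here, so the example simply prints the parsed JSON body.

# Fetch the quality data for this repository from the pt-edge API.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/RishabSA/interp-refusal-tokens"
resp = requests.get(url, timeout=10)  # anonymous access: 100 requests/day
resp.raise_for_status()
print(resp.json())  # inspect the returned quality fields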