RishabSA/interp-refusal-tokens
We study whether categorical refusal tokens enable controllable and interpretable safety behavior in language models.
This project helps AI safety researchers and machine learning engineers better control how large language models (LLMs) refuse harmful or inappropriate prompts. It takes a Llama 3 8B model fine-tuned to emit category-specific refusal tokens (e.g., for violence or illegal activity) and provides methods to steer its refusal behavior at inference time. The result is a more reliable, safer LLM that refuses benign requests less often while refusing genuinely harmful content more often.
Use this if you are developing or evaluating LLM safety features and need more granular, interpretable control over a model's refusal behavior without extensive retraining.
Not ideal if you are looking for a plug-and-play solution for general LLM deployment without needing to understand or fine-tune refusal mechanisms.
Stars
7
Forks
1
Language
Python
License
Apache-2.0
Category
Last pushed
Feb 28, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/RishabSA/interp-refusal-tokens"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
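The curl call above can also be scripted. Below is a minimal Python sketch using only the standard library; the helper names (`quality_url`, `fetch_quality`) are illustrative, the JSON response shape is not documented here, and how an API key would be attached (header vs. query parameter) is not specified, so only the keyless case shown in the curl example is covered:

```python
import json
from urllib.request import Request, urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """GET the quality record and decode the JSON body.

    Keyless requests are limited to 100/day per the listing; the key-based
    authentication scheme is not documented here, so it is omitted.
    """
    req = Request(quality_url(owner, repo), headers={"Accept": "application/json"})
    with urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    data = fetch_quality("RishabSA", "interp-refusal-tokens")
    print(data)
```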
Higher-rated alternatives
agentscope-ai/Trinity-RFT
Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement...
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO &...
zjunlp/EasyEdit
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
huggingface/alignment-handbook
Robust recipes to align language models with human and AI preferences
hyunwoongko/nanoRLHF
nanoRLHF: from-scratch journey into how LLMs and RLHF really work.