HOLYKEYZ/model-unfetter
The production engine for directional ablation. Unalign models and remove censorship efficiently on any hardware.
This tool helps AI safety researchers and red teamers remove refusal behaviors from Large Language Models (LLMs). It takes an existing LLM that has been trained to be cautious and modifies it to respond to all prompts without censoring or refusing. The output is a modified LLM that can be used for testing how models respond to potentially problematic queries.
Use this if you need to rigorously test the safety boundaries of an AI model by removing its built-in refusal mechanisms, especially for smaller models where standard methods fail.
Not ideal if you are looking for a tool to enhance or modify an LLM's helpfulness without altering its core safety alignment, or if you are not engaged in AI safety research.
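The core idea behind directional ablation can be sketched briefly. A minimal illustration, assuming the common difference-of-means formulation: a "refusal direction" is estimated from the gap between mean activations on refused vs. accepted prompts, then projected out of a weight matrix so the layer can no longer write to that direction. Function names, shapes, and the synthetic data below are illustrative, not this repo's API.

```python
import numpy as np

def refusal_direction(refused_acts, accepted_acts):
    """Unit difference-of-means direction between two activation sets."""
    d = refused_acts.mean(axis=0) - accepted_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate_weight(W, d):
    """Remove the rank-1 component of W's output space along direction d.

    W maps inputs to a (d_model,)-dimensional residual stream; after this
    projection, W's outputs have zero component along d.
    """
    return W - np.outer(d, d @ W)

rng = np.random.default_rng(0)
# Synthetic activations: "refused" prompts shifted along dimension 0.
refused = rng.normal(size=(64, 16)) + np.eye(16)[0]
accepted = rng.normal(size=(64, 16))
d = refusal_direction(refused, accepted)

W = rng.normal(size=(16, 16))
W_abl = ablate_weight(W, d)

x = rng.normal(size=16)
print(abs(d @ (W_abl @ x)))  # component along d is ~0 after ablation
```

In a real model the same projection would be applied to each layer's output-writing matrices; this sketch only demonstrates the linear-algebra step on one matrix.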
Stars
19
Forks
2
Language
Python
License
Apache-2.0
Category
Last pushed
Mar 19, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/HOLYKEYZ/model-unfetter"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
ModelTC/LightCompress
[EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs,...
p-e-w/heretic
Fully automatic censorship removal for language models
Orion-zhen/abliteration
Make abliterated models with transformers, easy and fast
YerbaPage/LongCodeZip
LongCodeZip: Compress Long Context for Code Language Models [ASE2025]
locuslab/wanda
A simple and effective LLM pruning approach.