HOLYKEYZ/model-unfetter

The production engine for directional ablation. Unalign models / remove censorship efficiently on any hardware.

38 / 100 (Emerging)

This tool helps AI safety researchers and red teamers remove refusal behaviors from Large Language Models (LLMs). It takes an existing LLM that has been trained to refuse certain requests and modifies it so that it responds to prompts without censoring or refusing. The output is a modified LLM that can be used to test how models respond to potentially problematic queries.
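For orientation, the core idea behind directional ablation can be sketched in a few lines of Python. This sketch is not taken from this repository; the difference-of-means "refusal direction" and every name, shape, and convention below are illustrative assumptions about how such tooling typically works.

# Minimal sketch of directional ablation (refusal-direction removal).
# Not this repository's implementation; shapes and names are assumptions.
import numpy as np

def refusal_direction(harmful_acts: np.ndarray, harmless_acts: np.ndarray) -> np.ndarray:
    """Difference-of-means direction between activations on harmful vs. harmless prompts.

    Both inputs are (num_prompts, hidden_dim) activations collected at some
    chosen layer and token position of the model.
    """
    direction = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

def ablate_direction(weight: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Project the refusal direction out of a weight matrix's output space.

    `weight` is assumed to be (hidden_dim, in_dim), i.e. it writes into the
    residual stream; removing the rank-1 component along `direction` stops
    the layer from writing along that direction.
    """
    d = direction.reshape(-1, 1)        # (hidden_dim, 1), unit norm
    projection = d @ (d.T @ weight)     # rank-1 component of weight along d
    return weight - projection

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hidden = 64
    harmful = rng.normal(size=(32, hidden)) + 0.5   # toy activations
    harmless = rng.normal(size=(32, hidden))
    d = refusal_direction(harmful, harmless)

    W = rng.normal(size=(hidden, 128))              # toy output-projection weight
    W_ablated = ablate_direction(W, d)
    # After ablation the weight can no longer write along the refusal direction.
    print(np.abs(d @ W_ablated).max())              # ~0 up to float error

In a real pipeline this projection would be applied to the weight matrices of every layer that writes to the residual stream, using activations gathered from the actual model rather than toy data.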

Use this if you need to rigorously test the safety boundaries of an AI model by removing its built-in refusal mechanisms, especially for smaller models where standard methods fail.

Not ideal if you want to enhance or modify an LLM's helpfulness without altering its core safety alignment, or if you are not engaged in AI safety research.

AI Safety Research · Red Teaming · LLM Alignment · Model Evaluation · AI Security
No Package · No Dependents
Maintenance 13 / 25
Adoption 6 / 25
Maturity 11 / 25
Community 8 / 25


Stars: 19
Forks: 2
Language: Python
License: Apache-2.0
Last pushed: Mar 19, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/HOLYKEYZ/model-unfetter"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
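The same endpoint can also be queried from Python. Only the URL comes from the curl example above; the structure of the JSON response is not documented here, so the example simply prints whatever is returned.

# Fetch the quality data for this repo; no API key needed on the free tier.
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/HOLYKEYZ/model-unfetter"

response = requests.get(URL, timeout=10)
response.raise_for_status()
print(response.json())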