p-e-w/heretic
Fully automatic censorship removal for language models
Heretic helps you modify transformer-based language models to remove their built-in 'safety alignment' or censorship. You provide an existing language model, and it produces a new version that answers prompts it previously refused, while maintaining its original intelligence. This is ideal for anyone who needs to use or experiment with uncensored language models.
12,369 stars. Actively maintained with 18 commits in the last 30 days.
Use this if you need to quickly and automatically create uncensored versions of transformer-based language models without complex configuration or manual fine-tuning.
Not ideal if you are working with SSMs, hybrid models, or models with inhomogeneous layers, as these are not yet supported.
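Tools in this space (see also Orion-zhen/abliteration below) typically remove refusals via directional ablation, or "abliteration": estimate a "refusal direction" in the model's activation space, then project that direction out of the weights. The following NumPy sketch shows only the core linear-algebra idea; the difference-of-means estimator and all names are illustrative, not heretic's actual implementation.

```python
import numpy as np

def refusal_direction(harmful_acts: np.ndarray, harmless_acts: np.ndarray) -> np.ndarray:
    """Estimate the refusal direction as the normalized difference between
    mean activations on refusal-triggering vs. harmless prompts."""
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(W: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Project the unit direction d out of a weight matrix W whose output
    dimension matches d: W' = (I - d d^T) W, so W' cannot write along d."""
    return W - np.outer(d, d) @ W
```

Applying such a projection to the relevant matrices in every layer leaves the model unable to represent the refusal feature while leaving orthogonal directions (and hence most capabilities) untouched.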
Stars: 12,369
Forks: 1,273
Language: Python
License: AGPL-3.0
Category:
Last pushed: Mar 13, 2026
Commits (30d): 18
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/p-e-w/heretic"
Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
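The same endpoint can be called from Python using only the standard library. A minimal sketch, assuming the URL pattern shown in the curl command above; the response schema is not documented here, so the example simply parses whatever JSON the server returns.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    # Builds the per-repo endpoint, e.g. quality_url("transformers", "p-e-w/heretic")
    return f"{API_BASE}/{category}/{repo}"

def fetch_quality(category: str, repo: str) -> dict:
    # Unauthenticated requests are rate-limited to 100/day
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)
```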
Related models
ModelTC/LightCompress
[EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs,...
Orion-zhen/abliteration
Make abliterated models with transformers, easy and fast
YerbaPage/LongCodeZip
LongCodeZip: Compress Long Context for Code Language Models [ASE2025]
locuslab/wanda
A simple and effective LLM pruning approach.
tommasomncttn/mergenetic
Flexible library for merging large language models (LLMs) via evolutionary optimization (ACL 2025 Demo).