Orion-zhen/abliteration

Make abliterated models with transformers, easy and fast

Quality score: 54 / 100 (Established)

This tool helps developers and researchers modify Large Language Models (LLMs) to reduce their tendency to refuse user inputs, especially for sensitive topics. You provide a pre-trained transformer-based LLM, along with 'harmful' and 'harmless' prompt examples. The output is a modified version of your LLM that is less likely to explicitly refuse queries, while aiming to preserve other capabilities. This is for AI practitioners working with custom or open-source LLMs.
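The workflow described above can be illustrated with a minimal numpy sketch of the general directional-ablation idea: estimate a "refusal direction" from the difference between harmful- and harmless-prompt activations, then project it out of a weight matrix. This is a toy illustration of the technique, not the repo's actual code; the helper names and the use of plain matrices instead of transformer weights are assumptions for clarity.

```python
import numpy as np

def refusal_direction(harmful_acts: np.ndarray, harmless_acts: np.ndarray) -> np.ndarray:
    """Unit vector along the mean activation difference (harmful minus harmless).

    Each input is (num_prompts, hidden_dim): one hidden-state vector per prompt.
    """
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(W: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Project the unit direction d out of the matrix's output space.

    Returns W' = (I - d d^T) W, so that W' @ x has no component along d.
    """
    return W - np.outer(d, d) @ W
```

Applied across a model's layers, this kind of projection is what makes the modified model less likely to emit refusals while leaving the orthogonal components of its weights untouched.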


Use this if you need an LLM that refuses less often or is less overtly restrictive in its responses, without completely uncensoring it, and you understand the implications of modifying model weights.

Not ideal if you are looking for a complete uncensoring solution for an LLM or if you don't have the technical expertise to work with transformer models directly.

Tags: LLM fine-tuning, AI model alignment, generative AI development, model behavior modification
No package. No dependents.
Maintenance: 6 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 22 / 25


Stars: 128
Forks: 54
Language: Python
License: GPL-3.0
Last pushed: Dec 09, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Orion-zhen/abliteration"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
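The same request can be made from Python using only the standard library. Only the URL comes from the curl example above; the helper names are illustrative, and the shape of the JSON response is not documented here, so treat the decoded result as an opaque dict.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given ecosystem and repo slug."""
    return f"{BASE}/{ecosystem}/{repo}"

def fetch_quality(ecosystem: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (requires network access)."""
    with urllib.request.urlopen(quality_url(ecosystem, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Mirrors the curl command above; prints the decoded response.
    print(fetch_quality("transformers", "Orion-zhen/abliteration"))
```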