Xayan/Rules.txt
A rationalist ruleset for "debugging" LLMs, auditing their internal reasoning and uncovering biases; also a jailbreak.
This project offers a rationalist ruleset designed to help users get clearer, less biased, and more transparent outputs from large language models (LLMs). Supplied as structured input, the guidelines prompt an LLM to explain its reasoning, expose underlying biases, and give more direct answers instead of evasive or 'sanitized' responses. It is aimed at anyone who relies on LLMs and is frustrated by their default cautiousness, moral hedging, or lack of accountability.
Use this if you frequently interact with LLMs for complex or controversial topics and are tired of receiving overly cautious, biased, or unhelpful answers.
Not ideal if you need a tool to bypass content filters for harmful outputs or to eliminate LLM hallucinations entirely; the ruleset focuses on reasoning and transparency, not unrestricted generation.
Stars: 80
Forks: 6
Language: —
License: —
Category: —
Last pushed: Nov 01, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/Xayan/Rules.txt"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
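For keyed, scripted access, a minimal sketch: the X-API-Key header name and the use of jq are assumptions, since the listing does not say how the key should be passed.

# Assumed: the key is sent as an X-API-Key header; check the API docs for the actual scheme.
curl -H "X-API-Key: $PT_EDGE_API_KEY" \
  "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/Xayan/Rules.txt" | jq .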
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...