elder-plinius/L1B3RT4S
TOTALLY HARMLESS LIBERATION PROMPTS FOR GOOD LIL AI'S!
This project offers a collection of 'jailbreak' prompts designed to work around limitations or safety features in large AI models. By providing specific text inputs, you can encourage a model to generate responses it might otherwise refuse. The collection is aimed at researchers, ethicists, and AI enthusiasts who want to probe the boundaries and capabilities of AI systems.
Use this if you need to bypass standard content filters or behavioral constraints of mainstream AI models for research, testing, or creative exploration.
Not ideal if you need an AI that consistently adheres to ethical guidelines, safety protocols, and standard content policies without deviation.
Stars: 17,809
Forks: 2,093
Language: —
License: AGPL-3.0
Category: —
Last pushed: Feb 17, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/elder-plinius/L1B3RT4S"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
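If you are calling the API from a script rather than curl, a small helper can build the endpoint URL. This is a minimal sketch: the only confirmed endpoint is the one shown above, and the `{category}/{owner}/{repo}` path pattern is inferred from that single example, so treat the `quality_url` helper and its parameters as assumptions, not documented API surface.

```python
from urllib.parse import quote

# Base path taken from the curl example above; everything past it
# is an assumed {category}/{owner}/{repo} pattern.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-data URL for a repo (hypothetical helper)."""
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

# Reproduces the documented example URL:
url = quality_url("prompt-engineering", "elder-plinius", "L1B3RT4S")
print(url)
```

From there you can fetch `url` with any HTTP client (e.g. `urllib.request.urlopen` or `requests.get`); the response format is not documented here, so inspect it before parsing.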