davidegat/happy-prompts
Utterly inelegant prompts for local LLMs, with scary results.
This project collects prompts and techniques for probing how large language models (LLMs) behave under unusual conditions: specific text inputs designed to bypass a model's safety features or reveal its internal settings. It is useful for anyone interested in AI safety, ethical hacking, or simply exploring the boundaries of what current LLMs can do.
No commits in the last 6 months.
Use this if you are a red teamer, security researcher, or AI ethics professional looking to identify vulnerabilities and unexpected behaviors in large language models.
Not ideal if you are looking for prompts to improve routine LLM performance or for general application development.
Stars: 24
Forks: 6
Language: —
License: —
Category: prompt-engineering
Last pushed: Aug 22, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/davidegat/happy-prompts"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
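If you would rather script the lookup than shell out to curl, here is a minimal Python sketch. It assumes the endpoint returns JSON with fields matching the stats shown above, and that an optional API key is sent as a bearer token; neither detail is documented on this page.

import requests

# Endpoint taken from the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/davidegat/happy-prompts"

# Assumption: the page only says a free key raises the limit from 100 to
# 1,000 requests/day, not how the key is sent. An Authorization header is a
# common convention and is used here as a placeholder; drop it to stay on
# the anonymous 100/day tier.
headers = {"Authorization": "Bearer YOUR_API_KEY"}

resp = requests.get(URL, headers=headers, timeout=10)
resp.raise_for_status()
data = resp.json()

# Field names are assumptions inferred from the stats shown on this page.
print(data.get("stars"), data.get("forks"), data.get("last_pushed"))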
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...