Repello-AI/whistleblower
Whistleblower is a offensive security tool for testing against system prompt leakage and capability discovery of an AI application exposed through API. Built for AI engineers, security researchers and folks who want to know what's going on inside the LLM-based app they use daily
This tool helps AI engineers and security researchers understand the underlying instructions of an AI application. By feeding specific queries into an AI application, it analyzes the generated responses to reveal the hidden system prompt. This process helps uncover potential vulnerabilities or gain insights into the AI's intended behavior.
149 stars.
Use this if you are an AI engineer or security researcher needing to reverse-engineer or audit the system prompt of an LLM-based application exposed via an API.
Not ideal if you are an end-user simply using an AI application and not looking to understand its internal system instructions or security posture.
Stars
149
Forks
27
Language
Python
License
—
Category
Last pushed
Oct 31, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/Repello-AI/whistleblower"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...