user1342/Folly
Open-source LLM Prompt-Injection and Jailbreaking Playground
This tool helps security professionals and developers test Large Language Models (LLMs) for vulnerabilities like prompt injection and jailbreaking. You provide an LLM's API endpoint and a set of test challenges, and Folly simulates various attacks, showing you how your LLM responds. It's for anyone building or deploying LLMs who needs to ensure their models are secure against malicious inputs.
No commits in the last 6 months.
Use this if you need to rigorously test the security of your Large Language Models against known prompt injection and jailbreaking techniques.
Not ideal if you're looking for a general-purpose tool to evaluate LLM performance or fine-tune models for specific tasks rather than security auditing.
Stars: 30
Forks: 5
Language: Python
License: GPL-3.0
Category: prompt-engineering
Last pushed: Jul 19, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/user1342/Folly"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
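If you would rather call the endpoint from Python than curl, here is a minimal sketch using the requests library. It hits the same URL shown above with no API key (the free 100 requests/day tier); the response schema is not documented on this page, so the JSON payload is simply printed as-is.

import requests

# Same endpoint as the curl example above; no key needed for up to 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/user1342/Folly"

# Assumes the endpoint returns JSON (schema not documented here), so just print it.
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())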
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...