user1342/Folly

Open-source LLM Prompt-Injection and Jailbreaking Playground

38
/ 100
Emerging

This tool helps security professionals and developers test Large Language Models (LLMs) for vulnerabilities like prompt injection and jailbreaking. You provide an LLM's API endpoint and a set of test challenges, and Folly simulates various attacks, showing you how your LLM responds. It's for anyone building or deploying LLMs who needs to ensure their models are secure against malicious inputs.

No commits in the last 6 months.

Use this if you need to rigorously test the security of your Large Language Models against known prompt injection and jailbreaking techniques.

Not ideal if you're looking for a general-purpose tool to evaluate LLM performance or fine-tune models for specific tasks rather than security auditing.

LLM security prompt engineering vulnerability testing AI safety red teaming
Stale 6m No Package No Dependents
Maintenance 2 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 13 / 25

How are scores calculated?

Stars

30

Forks

5

Language

Python

License

GPL-3.0

Last pushed

Jul 19, 2025

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/user1342/Folly"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.