fabraix/playground

A live environment to stress-test AI agent defenses through adversarial play 🧠

Score: 27 / 100 · Experimental

This project helps AI developers and security researchers uncover vulnerabilities in AI agents by providing a live environment for adversarial testing. You can propose scenarios in which an AI agent, equipped with specific instructions and tools, is challenged by participants attempting to bypass its safety guardrails. The system then publishes successful "jailbreak" techniques, helping the community understand and build more secure AI systems.

Use this if you are an AI developer, security researcher, or ML engineer looking to stress-test AI agent defenses in a live environment and contribute to collective knowledge about AI security.

Not ideal if you are looking for a simple AI library, a general-purpose AI development tool, or a consumer-facing application.

Tags: AI-security, AI-safety, red-teaming, vulnerability-testing, agent-development
No package · No dependents
Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 11 / 25
Community: 0 / 25


Stars: 21
Forks:
Language: TypeScript
License: MIT
Last pushed: Mar 13, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/fabraix/playground"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
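The same endpoint can also be queried programmatically. A minimal TypeScript sketch, assuming only the URL pattern shown in the curl example above; the `score` field in the JSON response is an assumption, not documented here:

```typescript
// Base URL taken from the curl example above.
const API_BASE = "https://pt-edge.onrender.com/api/v1/quality/agents";

// Build the quality-API URL for any owner/repo pair.
function qualityUrl(owner: string, repo: string): string {
  return `${API_BASE}/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}`;
}

// Fetch the quality record and return its overall score.
// NOTE: the "score" field name is assumed; inspect the raw
// response before relying on it.
async function fetchScore(owner: string, repo: string): Promise<number> {
  const res = await fetch(qualityUrl(owner, repo));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = (await res.json()) as { score?: number };
  if (typeof data.score !== "number") throw new Error("unexpected response shape");
  return data.score;
}
```

Unauthenticated calls count against the 100-requests/day limit; pass your key however the API documents once you have one.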