yeraydoblasbueno/llm-security-framework
Testing LLM vulnerabilities (Jailbreaks, Prompt Injections) locally using Python, Ollama, and an advanced LLM-as-a-Judge evaluation system.
Stars
—
Forks
—
Language
Jupyter Notebook
License
—
Category
Last pushed
Mar 23, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/yeraydoblasbueno/llm-security-framework"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...