olegnazarov/llm-fortress
Enterprise AI Security Platform - Real-time firewall protection for LLM applications against prompt injection, data leakage, and function abuse attacks
This firewall helps organizations protect their AI applications from security threats. It scans incoming requests to your AI models for malicious content, blocking attacks such as prompt injection and data leakage, and produces detailed security reports and alerts. That makes it well suited to cybersecurity teams, AI product managers, and enterprise IT departments.
No commits in the last 6 months.
Use this if you need to secure your large language model applications against prompt injection, data leakage, and unauthorized function abuse, with real-time protection and monitoring.
Not ideal if you are looking for a general-purpose network firewall or for securing applications that do not use LLMs.
Stars
23
Forks
5
Language
Python
License
MIT
Category
Last pushed
Sep 14, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/olegnazarov/llm-fortress"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
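The curl command above can also be scripted. The sketch below, using only the Python standard library, builds the endpoint URL for an arbitrary `owner/name` repo slug and fetches the JSON payload anonymously; the response schema is not documented in this listing, so the code simply decodes and prints whatever JSON comes back rather than assuming field names.

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"


def api_url(repo: str) -> str:
    """Build the quality-data endpoint URL for an owner/name repo slug."""
    return f"{BASE}/{repo}"


def fetch_quality(repo: str) -> dict:
    """Fetch and decode the JSON payload for a repo.

    Anonymous access is rate-limited to 100 requests/day; a free key
    raises that to 1,000/day (how the key is passed is not documented
    here, so this sketch sends no credentials).
    """
    with urllib.request.urlopen(api_url(repo)) as resp:
        return json.load(resp)


# Example usage (requires network access):
#   data = fetch_quality("olegnazarov/llm-fortress")
#   print(json.dumps(data, indent=2))
```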
Higher-rated alternatives
LLAMATOR-Core/llamator
Red-teaming Python framework for testing chatbots and GenAI systems.
sleeepeer/PoisonedRAG
[USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented...
kelkalot/simpleaudit
Lets you red-team your AI systems through adversarial probing. It is simple, effective, and...
JuliusHenke/autopentest
CLI enabling more autonomous black-box penetration tests using Large Language Models (LLMs)
SecurityClaw/SecurityClaw
A modular, skill-based autonomous Security Operations Center (SOC) agent that monitors...