LLAMATOR-Core/llamator
Red Teaming python-framework for testing chatbots and GenAI systems.
This framework helps AI product managers and security engineers systematically test their chatbots and generative AI systems for vulnerabilities. You provide it with your chatbot or GenAI system, and it outputs a test report documenting potential issues like prompt injection, data leakage, and misinformation. This is for professionals responsible for the safety and robustness of AI applications.
201 stars. Available on PyPI.
Use this if you need to thoroughly 'red team' your AI models, bots, or applications to find their weaknesses before deployment.
Not ideal if you are looking for a simple API for basic chatbot interaction or general-purpose AI development.
Stars
201
Forks
20
Language
Python
License
—
Category
Last pushed
Jan 16, 2026
Commits (30d)
0
Dependencies
21
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/LLAMATOR-Core/llamator"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
sleeepeer/PoisonedRAG
[USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented...
kelkalot/simpleaudit
Allows to red-team your AI systems through adversarial probing. It is simple, effective, and...
JuliusHenke/autopentest
CLI enabling more autonomous black-box penetration tests using Large Language Models (LLMs)
SecurityClaw/SecurityClaw
A modular, skill-based autonomous Security Operations Center (SOC) agent that monitors...
AI-secure/AgentPoison
[NeurIPS 2024] Official implementation for "AgentPoison: Red-teaming LLM Agents via Memory or...