perplext/LLMrecon
Enterprise-grade LLM security testing framework implementing OWASP LLM Top 10 with advanced prompt injection, jailbreak techniques, and automated vulnerability discovery for AI safety research.
This tool helps security professionals and AI developers find weaknesses in Large Language Models (LLMs) before deployment. Point it at your LLM, such as GPT-4 or Llama 3, and it probes the model with advanced attack techniques. The output is a report detailing vulnerabilities such as prompt injection risks, data leakage, and other security flaws, so you can verify your AI systems are robust before release.
Use this if you need to thoroughly test your AI models for security vulnerabilities against the latest OWASP LLM Top 10 guidelines and cutting-edge attack methods.
Not a good fit if you are looking for an AI model itself rather than a specialized tool for assessing AI security.
Stars: 12
Forks: 5
Language: Go
License: MIT
Category:
Last pushed: Mar 12, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/perplext/LLMrecon"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
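The curl command above can also be issued programmatically. Below is a minimal Go sketch, assuming the response is JSON; the response schema is not documented on this page, so it decodes into a generic map, and the "X-API-Key" header name is a hypothetical placeholder for however the free key is actually passed.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"time"
)

// Endpoint for this repo's quality data, taken from the listing above.
const endpoint = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/perplext/LLMrecon"

// fetchQuality retrieves the JSON payload for the repository.
// The schema is undocumented here, so we decode into a generic map.
func fetchQuality(apiKey string) (map[string]any, error) {
	client := &http.Client{Timeout: 10 * time.Second}
	req, err := http.NewRequest(http.MethodGet, endpoint, nil)
	if err != nil {
		return nil, err
	}
	if apiKey != "" {
		// Hypothetical header name; check the API docs once you have a key.
		req.Header.Set("X-API-Key", apiKey)
	}
	resp, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status: %s", resp.Status)
	}
	var data map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&data); err != nil {
		return nil, err
	}
	return data, nil
}

func main() {
	// Works without a key (100 requests/day); set PT_EDGE_API_KEY for 1,000/day.
	data, err := fetchQuality(os.Getenv("PT_EDGE_API_KEY"))
	if err != nil {
		fmt.Fprintln(os.Stderr, "fetch failed:", err)
		return
	}
	fmt.Printf("%v\n", data)
}
```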
Higher-rated alternatives
GreyDGL/PentestGPT
Automated Penetration Testing Agentic Framework Powered by Large Language Models
berylliumsec/nebula
AI-powered penetration testing assistant for automating recon, note-taking, and vulnerability analysis.
ipa-lab/hackingBuddyGPT
Helping ethical hackers use LLMs in 50 lines of code or less.
MorDavid/BruteForceAI
Advanced LLM-powered brute-force tool combining AI intelligence with automated login attacks
mbrg/power-pwn
An offensive/defense security toolset for discovery, recon and ethical assessment of AI Agents