perplext/LLMrecon

Enterprise-grade LLM security testing framework implementing OWASP LLM Top 10 with advanced prompt injection, jailbreak techniques, and automated vulnerability discovery for AI safety research.

Quality score: 45 / 100 (Emerging)

This tool helps security professionals and AI developers find weaknesses in Large Language Models (LLMs) before deployment. Point it at an LLM such as GPT-4 or Llama 3, and it probes the model with advanced attack techniques, producing a report that details vulnerabilities such as prompt injection risks, data leaks, and other security flaws so you can harden your AI systems before release.

Use this if you need to thoroughly test your AI models for security vulnerabilities against the latest OWASP LLM Top 10 guidelines and cutting-edge attack methods.

Not ideal if you are looking for an AI model itself rather than a specialized tool for assessing AI security.

Tags: AI-security, LLM-testing, vulnerability-management, AI-safety, penetration-testing
No package published · No dependents
Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 15 / 25
Community: 15 / 25


Stars: 12
Forks: 5
Language: Go
License: MIT
Last pushed: Mar 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/perplext/LLMrecon"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
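The curl command above returns JSON. As a minimal sketch of consuming that response programmatically, here is a Python snippet that parses a sample payload offline; note that the field names (`repo`, `score`, `tier`, `breakdown`) are assumptions for illustration, not the documented schema, so inspect the live output before relying on them. The values mirror the listing above: a 45/100 total built from four sub-scores out of 25 each.

```python
import json

# Hypothetical response shape; the actual schema returned by
# pt-edge.onrender.com may differ. Values are taken from the card above.
sample = """
{
  "repo": "perplext/LLMrecon",
  "score": 45,
  "tier": "Emerging",
  "breakdown": {"maintenance": 10, "adoption": 5, "maturity": 15, "community": 15}
}
"""

data = json.loads(sample)
# Sanity check: the four sub-scores (each out of 25) should sum to the total.
total = sum(data["breakdown"].values())
print(f'{data["repo"]}: {data["score"]}/100 ({data["tier"]})')
print(f"sub-scores sum to {total}")
```

In a real script you would replace `sample` with the body of an HTTP GET against the endpoint shown above.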