ThorneShadowbane/ai-code-guard

Detect security vulnerabilities in AI-generated code

Score: 35 / 100 (Emerging)

This tool helps software development teams ensure the code generated by AI assistants (like GitHub Copilot or ChatGPT) is secure before it goes into production. It takes your project's codebase as input and identifies potential security flaws, outputting a clear report of vulnerabilities like exposed secrets, injection risks, or AI-specific issues. It's designed for developers, security engineers, or dev leads who integrate AI coding tools into their workflow.

Use this if your team uses AI coding assistants and you need an automated way to detect security vulnerabilities that traditional scanners might miss, especially those unique to AI-generated code.

Not ideal if you're looking for a general-purpose security scanner for human-written code only, or if your team doesn't use AI for code generation at all.

application-security software-development secure-coding devsecops ai-development
No package · No dependents
Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 13 / 25
Community: 7 / 25


Stars: 11
Forks: 1
Language: Python
License: MIT
Last pushed: Jan 16, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ThorneShadowbane/ai-code-guard"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
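If you'd rather call the endpoint from code than from curl, a minimal Python sketch using only the standard library might look like the following. The URL pattern is taken from the curl example above; the shape of the JSON response is not documented here, so the keys of the returned dict are an assumption.

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    # Build the per-repo endpoint URL; owner and repo are path segments,
    # so percent-encode them defensively.
    return f"{BASE}/{quote(owner)}/{quote(repo)}"


def fetch_quality(owner: str, repo: str) -> dict:
    # Fetch and decode the JSON payload for a repo.
    # NOTE: the response fields are undocumented here; inspect the
    # returned dict rather than assuming specific keys.
    with urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


print(quality_url("ThorneShadowbane", "ai-code-guard"))
```

Calling `fetch_quality("ThorneShadowbane", "ai-code-guard")` would hit the same endpoint as the curl command; remember the unauthenticated rate limit of 100 requests/day.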