Tencent/AICGSecEval

A.S.E (AICGSecEval) is a repository-level security evaluation benchmark for AI-generated code, developed by Tencent's Wukong Code Security Team.

Score: 63 / 100 (Established)

This project helps security researchers and developers assess the security risks of code generated by AI tools. It takes in AI-generated code, simulates real-world development contexts, and then applies a mix of static and dynamic analysis to find vulnerabilities. The output is a detailed security evaluation, allowing users to benchmark and improve the security of AI-assisted programming.

1,143 stars. Actively maintained with 21 commits in the last 30 days.

Use this if you need a comprehensive, project-level benchmark to evaluate and compare the security performance of different AI code generation models or agents.

Not ideal if you are looking for a simple, quick scan for common vulnerabilities in manually written code.

Tags: AI-generated code security, application security, vulnerability assessment, DevSecOps, AI model evaluation

No package · No dependents
Maintenance: 20 / 25
Adoption: 10 / 25
Maturity: 15 / 25
Community: 18 / 25

How are scores calculated?

Stars: 1,143
Forks: 100
Language: Python
License:
Last pushed: Feb 25, 2026
Commits (30d): 21

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/Tencent/AICGSecEval"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
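If you prefer to call the endpoint from Python rather than curl, a minimal stdlib-only sketch follows. The URL pattern (`/api/v1/quality/agents/{owner}/{repo}`) is taken from the curl example above; the assumption that the endpoint returns JSON, and the helper names `quality_url` and `fetch_quality`, are this sketch's own, not part of the documented API.

```python
import json
import urllib.request
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality endpoint URL for a GitHub owner/repo pair."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch the quality record, assuming the endpoint returns a JSON object."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)

print(quality_url("Tencent", "AICGSecEval"))
```

No API key is sent here, which stays within the free 100 requests/day tier described above; how a key would be attached (header vs. query parameter) is not documented on this page, so it is left out.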