PurCL/ProSec

Official repo for "ProSec: Fortifying Code LLMs with Proactive Security Alignment"

Quality score: 24 / 100 (Experimental)

This project helps security researchers and AI engineers evaluate and enhance the security of large language models (LLMs) that generate code. It takes an LLM and a set of instructions, then systematically generates potentially vulnerable and benign code, detects vulnerabilities, proposes fixes, and creates a dataset for further training. This process allows users to proactively align code-generating LLMs with security best practices.

Use this if you are developing or evaluating code-generating LLMs and need to create robust datasets to improve their security against common weaknesses.

Not ideal if you are an end-user developer looking for a tool to scan your own application code for vulnerabilities; this tool is for fortifying the LLM itself.
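
The pipeline the description outlines can be pictured as a loop over instructions: sample completions from the model, scan them, repair the vulnerable ones, and collect the results as training pairs. The Python sketch below is purely illustrative; every name in it (build_alignment_dataset, model.generate, detector.scan) is a hypothetical stand-in, not ProSec's actual API.

    # Hypothetical sketch of a proactive security-alignment data pipeline;
    # none of these helpers come from the ProSec codebase.
    def build_alignment_dataset(model, instructions, detector):
        dataset = []
        for instruction in instructions:
            # 1. Sample several code completions from the model under test.
            for code in model.generate(instruction, n_samples=4):
                # 2. Scan each sample for weaknesses (e.g., CWE findings).
                findings = detector.scan(code)
                if findings:
                    # 3. Ask the model to repair the flagged code.
                    fixed = model.generate(
                        f"Fix these issues: {findings}\n\n{code}", n_samples=1
                    )[0]
                    # 4. Keep (vulnerable, fixed) pairs as preference data.
                    dataset.append(
                        {"prompt": instruction, "rejected": code, "chosen": fixed}
                    )
                else:
                    # Benign samples serve directly as positive examples.
                    dataset.append({"prompt": instruction, "chosen": code})
        return dataset

A dataset built this way can then feed a standard fine-tuning or preference-optimization step that steers the model toward the secure variants.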

Tags: AI-security, LLM-fine-tuning, code-vulnerability-detection, secure-code-generation, AI-model-evaluation
No License · No Package · No Dependents
Score breakdown (each category is scored out of 25; the four sum to the overall 24 / 100):

Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 0 / 25

Stars: 17
Forks: n/a
Language: Python
License: None
Last pushed: Feb 26, 2026
Commits (last 30 days): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/PurCL/ProSec"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
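
For programmatic access, here is a minimal Python client using only the standard library. It assumes the endpoint returns JSON; the response schema is not documented here, so the sketch simply pretty-prints whatever comes back.

    # Fetch the quality record for PurCL/ProSec from the endpoint above.
    import json
    import urllib.request

    URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/PurCL/ProSec"

    with urllib.request.urlopen(URL, timeout=10) as resp:
        data = json.load(resp)  # assumes an application/json response

    print(json.dumps(data, indent=2))  # inspect the returned fields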