PurCL/ProSec
Official repo for "ProSec: Fortifying Code LLMs with Proactive Security Alignment"
This project helps security researchers and AI engineers evaluate and strengthen the security of code-generating large language models (LLMs). Given an LLM and a set of coding instructions, it systematically elicits both potentially vulnerable and benign code from the model, detects the vulnerabilities, prompts for fixes, and assembles the results into a training dataset, letting users proactively align code LLMs with secure coding practices (a simplified sketch of this loop appears below).
Use this if you are developing or evaluating code-generating LLMs and need training data that hardens them against common weaknesses.
Not ideal if you are an application developer looking for a tool to scan your own code for vulnerabilities; this tool fortifies the LLM itself, not individual codebases.
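A minimal, illustrative sketch of the generate → detect → fix → collect loop described above. All function names, prompts, and the toy regex-based detector are hypothetical placeholders for orientation only, not the repository's actual API or pipeline.

import re
from typing import Callable

def scan_for_weaknesses(code: str) -> list[str]:
    # Toy stand-in for a real vulnerability detector: flags two well-known
    # insecure Python patterns. Purely illustrative, not the repo's detector.
    findings = []
    if re.search(r"\bos\.system\(", code):
        findings.append("CWE-78: possible command injection via os.system")
    if re.search(r"\bpickle\.loads\(", code):
        findings.append("CWE-502: unsafe deserialization via pickle.loads")
    return findings

def build_alignment_pairs(generate: Callable[[str], str],
                          instructions: list[str]) -> list[dict]:
    # generate: any text-in/text-out wrapper around the code LLM under test.
    dataset = []
    for instruction in instructions:
        candidate = generate(instruction)          # 1. elicit code from the LLM
        findings = scan_for_weaknesses(candidate)  # 2. detect vulnerabilities
        if not findings:
            # Benign output: keep as-is to preserve the model's normal utility.
            dataset.append({"instruction": instruction, "chosen": candidate})
            continue
        fix_prompt = ("Rewrite the code below to eliminate these weaknesses ("
                      + "; ".join(findings) + ") while preserving behavior:\n"
                      + candidate)
        fixed = generate(fix_prompt)               # 3. propose a fix
        dataset.append({                           # 4. keep the pair for training
            "instruction": instruction,
            "rejected": candidate,   # vulnerable completion
            "chosen": fixed,         # security-fixed completion
        })
    return dataset

Records shaped like (instruction, rejected, chosen) can then feed a standard fine-tuning or preference-optimization step; how ProSec formats and trains on its dataset is defined in the repo itself.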
Stars
17
Forks
—
Language
Python
License
—
Category
—
Last pushed
Feb 26, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/PurCL/ProSec"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
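For programmatic use, a Python equivalent of the curl command above is sketched here; the commented-out API-key header name is an assumption, since only the keyless request form is documented above.

import requests

# Same endpoint as the curl command above; no key is needed for the
# 100-requests/day tier.
resp = requests.get(
    "https://pt-edge.onrender.com/api/v1/quality/llm-tools/PurCL/ProSec",
    # headers={"X-API-Key": "<your-key>"},  # hypothetical header for the keyed tier
    timeout=10,
)
resp.raise_for_status()
print(resp.json())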
Higher-rated alternatives
open-thought/reasoning-gym
[NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards
Hmbown/Hegelion
Dialectical reasoning architecture for LLMs (Thesis → Antithesis → Synthesis)
LLM360/Reasoning360
A repo for open research on building large reasoning models
TsinghuaC3I/Awesome-RL-for-LRMs
A Survey of Reinforcement Learning for Large Reasoning Models
bowang-lab/BioReason
BioReason: Incentivizing Multimodal Biological Reasoning within a DNA-LLM Model | NeurIPS '25