lin-tan/llm-vul
For our ISSTA23 paper "How Effective are Neural Networks for Fixing Security Vulnerabilities?" by Yi Wu, Nan Jiang, Hung Viet Pham, Thibaud Lutellier, Jordan Davis, Lin Tan, Petr Babkin, and Sameena Shah.
This project provides tools and data to evaluate how well large language models (LLMs) can fix security vulnerabilities in Java code. It helps security researchers and software engineers assess the effectiveness of AI models at automatically patching known weaknesses. You input vulnerable Java projects from Vul4J or VJBench, and it outputs generated code patches along with their validation results, indicating whether the fixes compile and pass tests.
No commits in the last 6 months.
Use this if you are a security researcher or software engineer who wants to rigorously evaluate or compare the performance of different AI models in automatically fixing security flaws in Java applications.
Not ideal if you are looking for a ready-to-use automated program repair tool to fix vulnerabilities in your production code, as this project is focused on research and evaluation.
Stars
41
Forks
4
Language
Java
License
—
Category
—
Last pushed
Nov 13, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/lin-tan/llm-vul"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
OWASP/www-project-top-10-for-large-language-model-applications
OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)
esbmc/esbmc-ai
Automated Code Repair suite powered by ESBMC and LLMs.
cla7aye15I4nd/PatchAgent
[USENIX Security 25] PatchAgent is an LLM-based practical program repair agent that mimics human...
iSEngLab/AwesomeLLM4APR
[TOSEM 2026] A Systematic Literature Review on Large Language Models for Automated Program Repair
YerbaPage/MGDebugger
Multi-Granularity LLM Debugger [ICSE2026]