lin-tan/llm-vul

For our ISSTA23 paper "How Effective are Neural Networks for Fixing Security Vulnerabilities?" by Yi Wu, Nan Jiang, Hung Viet Pham, Thibaud Lutellier, Jordan Davis, Lin Tan, Petr Babkin, and Sameena Shah.

Score: 33 / 100 (Emerging)

This project provides tools and data to evaluate how well large language models (LLMs) can fix security vulnerabilities in Java code. It helps security researchers and software engineers assess the effectiveness of AI models at automatically patching known weaknesses. You input vulnerable Java projects from Vul4J or VJBench, and it outputs generated code patches and their validation results, indicating whether each fix compiles and passes the project's tests.

No commits in the last 6 months.

Use this if you are a security researcher or software engineer who wants to rigorously evaluate or compare the performance of different AI models in automatically fixing security flaws in Java applications.

Not ideal if you are looking for a ready-to-use automated program repair tool to fix vulnerabilities in your production code, as this project is focused on research and evaluation.

Tags: security-vulnerability-research, automated-program-repair, software-security-engineering, java-development, llm-evaluation
Status: Stale (6 months), no published package, no dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 10 / 25


Stars: 41
Forks: 4
Language: Java
License: (not listed)
Last pushed: Nov 13, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/lin-tan/llm-vul"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
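The same data can be fetched from a script. Below is a minimal Python sketch using only the standard library; the endpoint URL comes from the curl example above, but the helper names and the response schema are assumptions (the API's JSON fields are not documented here).

```python
import json
import urllib.request

# Base URL of the quality-score API (taken from the curl example above).
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_endpoint(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a GitHub repository."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality report as parsed JSON.

    The shape of the returned document is an assumption; inspect it
    before relying on specific fields.
    """
    with urllib.request.urlopen(quality_endpoint(owner, repo), timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Network call: requires the API host to be reachable.
    report = fetch_quality("lin-tan", "llm-vul")
    print(json.dumps(report, indent=2))
```

Unauthenticated use is limited to 100 requests/day, so cache responses if you poll many repositories.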