kapshaul/LLM-finetune-vuln-detection

Fine-tuning a Large Language Model (LLM) for code vulnerability detection using QLoRA, a method that quantizes the frozen base model to 4-bit precision (NormalFloat4) and trains small low-rank adapters (LoRA) on top.
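The QLoRA idea above can be illustrated with a toy, dependency-free sketch (the numbers, the uniform 4-bit grid, and the rank-1 shapes here are hypothetical illustrations, not the project's actual code): the base weight matrix is stored quantized and frozen, and only a small low-rank product B·A would be trained.

```python
# Toy illustration of the QLoRA principle (hypothetical, no ML libraries):
# freeze a 4-bit-quantized base weight matrix, train only a low-rank adapter.

def quantize4(w, scale):
    """Round to a signed 4-bit grid: 16 levels, q in [-8, 7], value q * scale."""
    q = max(-8, min(7, round(w / scale)))
    return q * scale

def matmul(a, b):
    """Plain-Python matrix multiply for small nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# Frozen base weights (2x2), quantized to 4-bit precision.
W = [[0.31, -0.52], [0.07, 0.88]]
scale = 0.1
Wq = [[quantize4(w, scale) for w in row] for row in W]

# Rank-1 adapter: only B (2x1) and A (1x2) hold trainable parameters.
B = [[0.05], [-0.02]]
A = [[0.4, 0.1]]
delta = matmul(B, A)

# Effective weight used in the forward pass: quantized base + adapter update.
W_eff = [[Wq[i][j] + delta[i][j] for j in range(2)] for i in range(2)]
print(W_eff)  # ≈ [[0.32, -0.495], [0.092, 0.698]]
```

Note how 0.88 is clamped to the grid's maximum level (7 × 0.1 = 0.7): quantization error on the frozen base is tolerated because the trainable adapter can compensate during fine-tuning. In the real method, NF4 quantization (via bitsandbytes) and LoRA adapters (via PEFT) play these two roles.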

Quality score: 20 / 100 (Experimental)

This project helps software security engineers and developers more efficiently identify vulnerabilities in code using large language models. It takes source code as input and produces a judgment on whether the code contains security vulnerabilities. The end-user would be a security researcher or a lead developer responsible for code quality and security.

No commits in the last 6 months.

Use this if you need to fine-tune a large language model to detect code vulnerabilities but are limited by GPU memory or computational resources.

Not ideal if you require state-of-the-art performance for vulnerability detection and have access to extensive high-end GPU resources.

Topics: code-security, vulnerability-detection, LLM-fine-tuning, software-development, AI-security
Badges: No License · Stale (6 months) · No Package · No Dependents
Score breakdown: Maintenance 0 / 25 · Adoption 5 / 25 · Maturity 8 / 25 · Community 7 / 25


Stars: 10
Forks: 1
Language: Python
License: None
Last pushed: Sep 11, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kapshaul/LLM-finetune-vuln-detection"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.