kapshaul/LLM-finetune-vuln-detection
Fine-tuning a Large Language Model (LLM) for code vulnerability detection using QLoRA, a method that quantizes the base model to 4-bit precision (NF4) and trains small low-rank adapters on top of the frozen weights.
This project helps security engineers and developers identify vulnerabilities in code more efficiently using large language models. It takes source code as input and outputs a judgment on whether the code contains security vulnerabilities. Typical end users are security researchers or lead developers responsible for code quality and security.
No commits in the last 6 months.
Use this if you need to fine-tune a large language model to detect code vulnerabilities but are limited by GPU memory or computational resources.
Not ideal if you require state-of-the-art performance for vulnerability detection and have access to extensive high-end GPU resources.
Stars: 10
Forks: 1
Language: Python
License: —
Category: —
Last pushed: Sep 11, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kapshaul/LLM-finetune-vuln-detection"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Goekdeniz-Guelmez/mlx-lm-lora
Train Large Language Models on MLX.
uber-research/PPLM
Plug and Play Language Model implementation. Allows steering the topic and attributes of GPT-2 models.
VHellendoorn/Code-LMs
Guide to using pre-trained large language models of source code
ssbuild/chatglm_finetuning
ChatGLM-6B and Alpaca fine-tuning.
jarobyte91/pytorch_beam_search
A lightweight implementation of Beam Search for sequence models in PyTorch.