null1024-ws/Poisoning-Attack-on-Code-Completion-Models
USENIX Security'24 Paper Repo
This project helps security researchers and developers understand and demonstrate vulnerabilities in AI-powered code completion tools. It shows how malicious code, disguised as safe, can be injected into the training data of these models. The output is a method to create poisoned code examples that can trick code completion systems into suggesting insecure code.
No commits in the last 6 months.
Use this if you are a security researcher or red team professional aiming to analyze and expose potential weaknesses in large language models used for code completion.
Not ideal if you are looking for a tool to fix vulnerabilities or write secure code directly, as this focuses on demonstrating attack vectors.
Stars
8
Forks
—
Language
Python
License
—
Category
Last pushed
May 12, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/null1024-ws/Poisoning-Attack-on-Code-Completion-Models"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
OWASP/www-project-top-10-for-large-language-model-applications
OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)
esbmc/esbmc-ai
Automated Code Repair suite powered by ESBMC and LLMs.
cla7aye15I4nd/PatchAgent
[USENIX Security 25] PatchAgent is a LLM-based practical program repair agent that mimics human...
iSEngLab/AwesomeLLM4APR
[TOSEM 2026]A Systematic Literature Review on Large Language Models for Automated Program Repair
YerbaPage/MGDebugger
Multi-Granularity LLM Debugger [ICSE2026]