null1024-ws/Poisoning-Attack-on-Code-Completion-Models

USENIX Security'24 Paper Repo

12
/ 100
Experimental

This project helps security researchers and developers understand and demonstrate vulnerabilities in AI-powered code completion tools. It shows how malicious code, disguised as safe, can be injected into the training data of these models. The output is a method to create poisoned code examples that can trick code completion systems into suggesting insecure code.

No commits in the last 6 months.

Use this if you are a security researcher or red team professional aiming to analyze and expose potential weaknesses in large language models used for code completion.

Not ideal if you are looking for a tool to fix vulnerabilities or write secure code directly, as this focuses on demonstrating attack vectors.

Application Security Red Teaming Code Security AI Model Auditing Vulnerability Research
No License Stale 6m No Package No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 8 / 25
Community 0 / 25

How are scores calculated?

Stars

8

Forks

Language

Python

License

Last pushed

May 12, 2024

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/null1024-ws/Poisoning-Attack-on-Code-Completion-Models"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.