Zierax/Basic-ML-prompt-injections

llm attacks basic payloads

35
/ 100
Emerging

This project helps security professionals and developers understand how attackers can manipulate large language models (LLMs) using various prompt injection techniques. It takes examples of crafted prompts as input and demonstrates how they can bypass security or extract information. The output shows the potential unauthorized access or altered model behavior, which is critical for building robust defenses.

No commits in the last 6 months.

Use this if you are responsible for securing LLM-powered applications and need to understand common attack vectors to protect against them.

Not ideal if you are looking for an automated penetration testing tool or a comprehensive security solution.

LLM security application security vulnerability research AI safety penetration testing
Stale 6m No Package No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 14 / 25

How are scores calculated?

Stars

9

Forks

3

Language

License

MIT

Last pushed

Apr 15, 2024

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Zierax/Basic-ML-prompt-injections"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.