AdityaBhatt3010/Exploiting-vulnerabilities-in-LLM-APIs

Weaponizing LLM prompt injection to hijack user deletion logic — an offensive deep dive into excessive agency abuse.

22
/ 100
Experimental

This project helps security researchers and penetration testers understand and demonstrate how vulnerabilities in Large Language Model (LLM) APIs can be exploited. It shows how to use carefully crafted prompts to discover API access, identify exploitable parameters, and inject malicious commands, ultimately leading to unauthorized actions like file deletion. The outcome is a clear proof-of-concept for LLM prompt injection and command execution.

No commits in the last 6 months.

Use this if you are a cybersecurity professional or ethical hacker investigating potential vulnerabilities in LLM-powered applications and need to understand how to weaponize prompt injection for privilege escalation or unauthorized system access.

Not ideal if you are looking for defensive strategies or code examples to prevent LLM prompt injection, as this project focuses purely on offensive exploitation techniques.

penetration-testing red-teaming vulnerability-research application-security LLM-security
Stale 6m No Package No Dependents
Maintenance 2 / 25
Adoption 5 / 25
Maturity 15 / 25
Community 0 / 25

How are scores calculated?

Stars

10

Forks

Language

License

MIT

Last pushed

Aug 01, 2025

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/AdityaBhatt3010/Exploiting-vulnerabilities-in-LLM-APIs"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.