AdityaBhatt3010/Exploiting-vulnerabilities-in-LLM-APIs
Weaponizing LLM prompt injection to hijack user deletion logic — an offensive deep dive into excessive agency abuse.
This project helps security researchers and penetration testers understand and demonstrate how vulnerabilities in Large Language Model (LLM) APIs can be exploited. It shows how to use carefully crafted prompts to discover API access, identify exploitable parameters, and inject malicious commands, ultimately leading to unauthorized actions like file deletion. The outcome is a clear proof-of-concept for LLM prompt injection and command execution.
No commits in the last 6 months.
Use this if you are a cybersecurity professional or ethical hacker investigating potential vulnerabilities in LLM-powered applications and need to understand how to weaponize prompt injection for privilege escalation or unauthorized system access.
Not ideal if you are looking for defensive strategies or code examples to prevent LLM prompt injection, as this project focuses purely on offensive exploitation techniques.
Stars
10
Forks
—
Language
—
License
MIT
Category
Last pushed
Aug 01, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/AdityaBhatt3010/Exploiting-vulnerabilities-in-LLM-APIs"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
GreyDGL/PentestGPT
Automated Penetration Testing Agentic Framework Powered by Large Language Models
berylliumsec/nebula
AI-powered penetration testing assistant for automating recon, note-taking, and vulnerability analysis.
ipa-lab/hackingBuddyGPT
Helping Ethical Hackers use LLMs in 50 Lines of Code or less..
MorDavid/BruteForceAI
Advanced LLM-powered brute-force tool combining AI intelligence with automated login attacks
mbrg/power-pwn
An offensive/defense security toolset for discovery, recon and ethical assessment of AI Agents