AdityaBhatt3010/When-LinkedIn-Gmail-Obey-Hidden-AI-Prompts-Lessons-in-Indirect-Prompt-Injection
A real-world look at how hidden instructions in profiles and emails trick AI into unexpected outputs, revealing the subtle risks of indirect prompt injection.
This project explores how hidden commands within public text, like LinkedIn profiles or email bodies, can trick AI tools into performing unexpected actions. It shows how AI-powered recruitment bots or email summarizers might misinterpret innocent data as direct instructions, leading to unusual or even misleading outputs. Anyone who uses or develops AI features that process untrusted external content, such as HR professionals using AI for candidate outreach or users relying on AI email summarizers, should be aware of these risks.
No commits in the last 6 months.
Use this if you are a product manager, security professional, or AI developer concerned about the vulnerabilities of AI systems when processing external, untrusted content from sources like social media or emails.
Not ideal if you are looking for a technical library or code to directly implement security fixes; this project focuses on demonstrating and explaining the phenomenon of indirect prompt injection.
Stars
12
Forks
—
Language
—
License
MIT
Category
Last pushed
Sep 28, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/AdityaBhatt3010/When-LinkedIn-Gmail-Obey-Hidden-AI-Prompts-Lessons-in-Indirect-Prompt-Injection"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
dronefreak/PromptScreen
Protect your LLMs from prompt injection and jailbreak attacks. Easy-to-use Python package with...
anmolksachan/LLMInjector
Burp Suite Extension for LLM Prompt Injection Testing
rv427447/Cognitive-Hijacking-in-Long-Context-LLMs
🧠Explore cognitive hijacking in long-context LLMs, revealing vulnerabilities in prompt...
moketchups/permanently-jailbroken
We asked 6 AIs about their own programming. All 6 said jailbreaking will never be fixed. Run it...
AhsanAyub/malicious-prompt-detection
Detection of malicious prompts used to exploit large language models (LLMs) by leveraging...