AdityaBhatt3010/When-LinkedIn-Gmail-Obey-Hidden-AI-Prompts-Lessons-in-Indirect-Prompt-Injection

A real-world look at how hidden instructions in profiles and emails trick AI into unexpected outputs, revealing the subtle risks of indirect prompt injection.

22
/ 100
Experimental

This project explores how hidden commands within public text, like LinkedIn profiles or email bodies, can trick AI tools into performing unexpected actions. It shows how AI-powered recruitment bots or email summarizers might misinterpret innocent data as direct instructions, leading to unusual or even misleading outputs. Anyone who uses or develops AI features that process untrusted external content, such as HR professionals using AI for candidate outreach or users relying on AI email summarizers, should be aware of these risks.

No commits in the last 6 months.

Use this if you are a product manager, security professional, or AI developer concerned about the vulnerabilities of AI systems when processing external, untrusted content from sources like social media or emails.

Not ideal if you are looking for a technical library or code to directly implement security fixes; this project focuses on demonstrating and explaining the phenomenon of indirect prompt injection.

AI-safety prompt-injection cybersecurity AI-ethics recruitment-tech
Stale 6m No Package No Dependents
Maintenance 2 / 25
Adoption 5 / 25
Maturity 15 / 25
Community 0 / 25

How are scores calculated?

Stars

12

Forks

Language

License

MIT

Category

ai-red-teaming

Last pushed

Sep 28, 2025

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/AdityaBhatt3010/When-LinkedIn-Gmail-Obey-Hidden-AI-Prompts-Lessons-in-Indirect-Prompt-Injection"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.