microsoft/llmail-inject-challenge-analysis
Data analysis of the results of the LLMail-Inject challenge
This project provides an in-depth analysis of results from a prompt injection challenge focused on large language model (LLM) security within email systems. It takes submissions from attack attempts against an LLM-powered email assistant and evaluates their effectiveness. Security researchers and LLM developers can use this to understand the strengths and weaknesses of different prompt injection defenses.
No commits in the last 6 months.
Use this if you are a security researcher or LLM developer seeking to understand how adaptive prompt injection attacks bypass defenses in email-based LLM applications.
Not ideal if you are looking for an active tool to perform prompt injection attacks or to implement defenses directly, as this is an analysis of a past challenge.
Stars: 10
Forks: 1
Language: Jupyter Notebook
License: MIT
Last pushed: Jul 01, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/microsoft/llmail-inject-challenge-analysis"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
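The same endpoint can also be called from Python instead of curl. A minimal sketch, assuming the endpoint returns JSON (the response schema is not documented here, so inspect the parsed result before relying on any field names):

```python
import json
import urllib.request

# Base URL taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given GitHub repository."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record; assumes the response body is JSON."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

# Example: the URL for this repository's record.
url = quality_url("microsoft", "llmail-inject-challenge-analysis")
print(url)
```

Without an API key this counts against the 100 requests/day limit, so cache responses locally if you poll more than one repository.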
Higher-rated alternatives
ethz-spylab/agentdojo
A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.
guardrails-ai/guardrails
Adding guardrails to large language models.
JasonLovesDoggo/caddy-defender
Caddy module to block or manipulate requests originating from AIs or cloud services trying to...
deadbits/vigil-llm
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language...
inkdust2021/VibeGuard
Uses just 1% memory while protecting 99% of your personal privacy.