corca-ai/awesome-llm-security

A curation of awesome tools, documents and projects about LLM Security.

Score: 41 / 100 (Emerging)

This project offers a curated list of research papers, benchmarks, and tools focused on securing Large Language Models (LLMs). It helps AI security researchers and practitioners understand and mitigate vulnerabilities like prompt injection, data leakage, and adversarial attacks. You can find comprehensive resources on identifying potential security flaws and implementing defense mechanisms for LLM-powered applications.

1,546 stars. No commits in the last 6 months.

Use this if you are a security researcher, AI safety engineer, or a developer building with LLMs and need to understand the latest threats and defenses to ensure your systems are robust against attacks.

Not ideal if you are looking for a plug-and-play security tool to immediately integrate into your existing LLM application without deeper technical understanding.

Tags: AI security, LLM safety, prompt engineering, attacks, application security, cybersecurity, research

Badges: No License, Stale (6 months), No Package, No Dependents

Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 21 / 25


Stars: 1,546
Forks: 171
Language: (not listed)
License: none
Last pushed: Aug 20, 2025
Commits (last 30 days): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/corca-ai/awesome-llm-security"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
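For scripted access, the same endpoint can be called from Python. A minimal sketch, with two caveats: the response schema is not documented on this page, and the way an API key is sent (here, a Bearer `Authorization` header) is an assumption, so check the service's docs before relying on it:

```python
import json
import urllib.request
from typing import Optional

# Base path mirrors the curl example above.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL, e.g. .../llm-tools/corca-ai/awesome-llm-security."""
    return f"{BASE_URL}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str,
                  api_key: Optional[str] = None) -> dict:
    """GET the endpoint and parse the body as JSON.

    Assumptions: the body is a JSON object, and a key (if any) is passed
    as a Bearer token -- neither is confirmed by this page.
    """
    request = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        request.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.load(response)
```

Calling `fetch_quality("llm-tools", "corca-ai", "awesome-llm-security")` performs the same request as the curl command; without a key it counts against the 100-requests/day limit.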