user1342/Awesome-LLM-Red-Teaming
A curated list of awesome LLM Red Teaming training, resources, and tools.
This resource helps security researchers, AI developers, and auditors identify and exploit vulnerabilities in large language models (LLMs). It provides a curated collection of tools, guides, and research for conducting red-teaming exercises. You'll find resources ranging from practice environments for prompt injection to advanced frameworks for automated adversarial testing, allowing you to expose weaknesses in LLM security and alignment.
No commits in the last 6 months.
Use this if you need to systematically test the security, robustness, and ethical alignment of LLM-powered applications through adversarial attacks and vulnerability research.
Not ideal if you are looking for resources on general LLM development, fine-tuning, or application building, as its focus is exclusively on security and exploitation.
Stars
83
Forks
12
Language
—
License
MIT
Category
Last pushed
Sep 04, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/user1342/Awesome-LLM-Red-Teaming"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
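The same endpoint can be called from code. A minimal sketch, assuming the endpoint returns JSON (the response schema is not documented on this page, and the helper names `quality_url`/`fetch_quality` are hypothetical):

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for an owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record for a repository.

    Assumes a JSON response body; field names are not guaranteed here.
    The free tier allows 100 requests/day without a key.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# Example (performs a network call, so it is commented out):
# data = fetch_quality("user1342", "Awesome-LLM-Red-Teaming")
```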
Compare
Higher-rated alternatives
CryptoAILab/Awesome-LM-SSP
A reading list for large-model safety, security, and privacy (including Awesome LLM Security,...
liu673/Awesome-LLM4Security
This project aims to consolidate and share high-quality resources and tools across the...
ElNiak/awesome-ai-cybersecurity
Welcome to the ultimate list of resources for AI in cybersecurity. This repository aims to...
anmolksachan/AI-ML-Free-Resources-for-Security-and-Prompt-Injection
AI/ML Pentesting Roadmap for Beginners
Ashfaaq98/awesome-genai-cyberhub
A curated list of LLM-driven cybersecurity resources