Awesome-LLM-Red-Teaming and awesome-llm-security

These two projects are direct competitors: both are curated lists of resources on LLM security, with Awesome-LLM-Red-Teaming focused specifically on red teaming.

awesome-llm-security overall score: 29 (Experimental)

Metric           Awesome-LLM-Red-Teaming        awesome-llm-security
Maintenance      2/25                           13/25
Adoption         9/25                           4/25
Maturity         16/25                          3/25
Community        15/25                          9/25
Stars            83                             7
Forks            12                             1
Downloads        —                              —
Commits (30d)    0                              0
Language         —                              —
License          MIT                            —
Flags            Stale 6m, No Package,          No License, No Package,
                 No Dependents                  No Dependents

About Awesome-LLM-Red-Teaming

user1342/Awesome-LLM-Red-Teaming

A curated list of awesome LLM Red Teaming training, resources, and tools.

This resource helps security researchers, AI developers, and auditors identify and exploit vulnerabilities in large language models (LLMs). It provides a curated collection of tools, guides, and research for conducting red-teaming exercises. You'll find resources ranging from practice environments for prompt injection to advanced frameworks for automated adversarial testing, allowing you to expose weaknesses in LLM security and alignment.

Tags: AI security, red teaming, vulnerability research, LLM auditing, prompt engineering

About awesome-llm-security

beyefendi/awesome-llm-security

Awesome LLM security tools, research, and documents

Scores updated daily from GitHub, PyPI, and npm data.