CryptoAILab/Awesome-LM-SSP

A reading list on the safety, security, and privacy of large models (including Awesome LLM Security, Safety, etc.).

Score: 60 / 100 (Established)

This resource helps researchers and practitioners in the field of large models understand and mitigate risks related to safety, security, and privacy. It provides a curated reading list and database of research papers, books, competitions, and toolkits on topics like jailbreaking, adversarial attacks, and data privacy. Anyone working on or deploying large language, vision-language, or diffusion models would find this valuable.

1,882 stars. Actively maintained with 12 commits in the last 30 days.

Use this if you need to quickly find academic papers, benchmarks, or toolkits related to ensuring the trustworthiness of large AI models.

Not ideal if you are looking for an executable software library or an in-depth tutorial on how to implement specific safety features.

Tags: AI Safety, Model Security, Data Privacy, Large Language Models, AI Ethics

No package · No dependents

- Maintenance: 17 / 25
- Adoption: 10 / 25
- Maturity: 16 / 25
- Community: 17 / 25


- Stars: 1,882
- Forks: 122
- Language: (none listed)
- License: Apache-2.0
- Last pushed: Mar 04, 2026
- Commits (30d): 12

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/CryptoAILab/Awesome-LM-SSP"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
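The curl call above can also be scripted. A minimal Python sketch, using only the standard library, that builds the endpoint URL and fetches the JSON payload; the response schema and any API-key mechanism are not documented here, so those parts are assumptions:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(repo: str) -> str:
    # Build the endpoint URL for an "owner/name" repo slug,
    # matching the curl example shown above.
    return f"{BASE}/llm-tools/{repo}"

def fetch_quality(repo: str) -> dict:
    # Anonymous access is rate-limited to 100 requests/day.
    # NOTE: the JSON field names are not specified on this page,
    # so callers should inspect the returned dict themselves.
    with urllib.request.urlopen(quality_url(repo)) as resp:
        return json.load(resp)

print(quality_url("CryptoAILab/Awesome-LM-SSP"))
# → https://pt-edge.onrender.com/api/v1/quality/llm-tools/CryptoAILab/Awesome-LM-SSP
```

Calling `fetch_quality("CryptoAILab/Awesome-LM-SSP")` would perform the same request as the curl command; how a key for the 1,000/day tier is passed (header vs. query parameter) is not stated on this page.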