yeasy/ai_security_guide

从原理到实践,全面掌握大语言模型安全攻防之道

33
/ 100
Emerging

This guide helps you understand and defend against security threats in large language models (LLMs) like ChatGPT. It explains how attackers exploit LLMs through methods like prompt injection and data poisoning, and then teaches you how to build secure LLM applications. It's for AI/ML engineers, security specialists, technical managers, and researchers who need to ensure AI systems are safe and compliant.

Use this if you are developing, managing, or securing applications that use large language models and need a comprehensive understanding of their unique security challenges and solutions.

Not ideal if you are looking for a general introduction to AI or only basic information about how LLMs work without a focus on security.

AI Security Large Language Models Cybersecurity AI Governance Application Security
No License No Package No Dependents
Maintenance 10 / 25
Adoption 5 / 25
Maturity 3 / 25
Community 15 / 25

How are scores calculated?

Stars

12

Forks

5

Language

License

Last pushed

Mar 13, 2026

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/yeasy/ai_security_guide"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.