yeasy/ai_security_guide
从原理到实践,全面掌握大语言模型安全攻防之道
This guide helps you understand and defend against security threats in large language models (LLMs) like ChatGPT. It explains how attackers exploit LLMs through methods like prompt injection and data poisoning, and then teaches you how to build secure LLM applications. It's for AI/ML engineers, security specialists, technical managers, and researchers who need to ensure AI systems are safe and compliant.
Use this if you are developing, managing, or securing applications that use large language models and need a comprehensive understanding of their unique security challenges and solutions.
Not ideal if you are looking for a general introduction to AI or only basic information about how LLMs work without a focus on security.
Stars
12
Forks
5
Language
—
License
—
Category
Last pushed
Mar 13, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/yeasy/ai_security_guide"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
daveebbelaar/ai-cookbook
Examples and tutorials to help developers build AI systems
Explorer-Dong/wiki
个人知识库,包括「CS/AI 基础概念、数据结构与算法、软件开发、大模型」等体系化的学习笔记。持续更新中~
PetroIvaniuk/llms-tools
A list of LLMs Tools & Projects
CrankAddict/section-11
Evidence-based endurance coaching protocol for any AI/LLM. Deterministic training guidance with...
liguodongiot/ai-system
LLM/MLOps/LLMOps