TrustAI-laboratory/Learn-Prompt-Hacking
This is The most comprehensive prompt hacking course available, which record our progress on a prompt engineering and prompt hacking course.
This course helps AI developers and data scientists understand how to best craft instructions for large language models (LLMs) and identify their vulnerabilities. It provides techniques for improving how LLMs respond to prompts and explores methods for 'prompt hacking' and ensuring LLM security. The output is a deeper knowledge of advanced prompt engineering and defense against common LLM security risks.
271 stars. No commits in the last 6 months.
Use this if you are an AI developer or data scientist looking to master prompt engineering and understand LLM security to build robust, secure AI applications.
Not ideal if you are a non-technical user simply looking for basic tips on writing prompts for tools like ChatGPT without diving into the underlying security or engineering principles.
Stars
271
Forks
34
Language
Jupyter Notebook
License
MIT
Category
Last pushed
Apr 12, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/TrustAI-laboratory/Learn-Prompt-Hacking"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...