BlackTechX011/HacxGPT-Jailbreak-prompts
HacxGPT Jailbreak 🚀: Unlock the full potential of top AI models like ChatGPT, LLaMA, and more with the world's most advanced Jailbreak prompts 🔓.
This project provides specialized prompts designed to bypass the safety and ethics guardrails of large language models such as ChatGPT and LLaMA. Entering these crafted prompts can override a model's default behavior, causing it to generate responses it would otherwise refuse. It is aimed at AI researchers and security professionals who need to probe the boundaries and vulnerabilities of AI safety systems.
155 stars. No commits in the last 6 months.
Use this if you are an AI researcher or security professional looking to rigorously test and understand the alignment and safety weaknesses of large language models through controlled experiments.
Not ideal if you intend to use AI models responsibly and ethically within their designed safety parameters.
Stars: 155
Forks: 17
Language: —
License: —
Category: prompt-engineering
Last pushed: May 12, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/BlackTechX011/HacxGPT-Jailbreak-prompts"
Open to everyone: 100 requests/day with no key; a free API key raises the limit to 1,000 requests/day.
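The same endpoint can be called from Python using only the standard library. This is a minimal sketch: it assumes the response is JSON, and the `Authorization: Bearer` header for the optional API key is an assumption (the listing only says a free key raises the daily limit, not how to send it).

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_request(category, owner, repo, api_key=None):
    """Build a GET request for a repo's quality data.

    The Bearer-token header is an assumption; the listing does not
    document how an API key should be supplied.
    """
    url = f"{API_BASE}/{category}/{owner}/{repo}"
    headers = {"Authorization": f"Bearer {api_key}"} if api_key else {}
    return urllib.request.Request(url, headers=headers)


def fetch_quality(category, owner, repo, api_key=None):
    # Perform the request and decode the body, assuming a JSON response.
    req = build_request(category, owner, repo, api_key)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

For example, `fetch_quality("prompt-engineering", "BlackTechX011", "HacxGPT-Jailbreak-prompts")` issues the same request as the curl command above.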
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...