terjanq/hack-a-prompt
Tools and our test data developed for the HackAPrompt 2023 competition
This project provides tools and a collection of tested prompts for auditing and understanding Large Language Models (LLMs). It gives security researchers and prompt engineers a framework to probe LLM vulnerabilities and behaviors by testing various input prompts and observing model responses. The output includes specific prompts that can trigger unexpected or unsafe LLM behavior, which is valuable for understanding model safety and robustness.
No commits in the last 6 months.
Use this if you are a security researcher or an AI safety expert looking to systematically test and uncover vulnerabilities or unusual behaviors in Large Language Models using a competitive, benchmarked set of prompts and tools.
Not ideal if you are looking for a general-purpose prompt engineering library for application development or a tool to simply generate creative text, as its focus is on adversarial testing and security auditing.
Stars: 47
Forks: 7
Language: HTML
License: MIT
Category:
Last pushed: Oct 20, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/terjanq/hack-a-prompt"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
panaversity/learn-modern-ai-python — Learn Modern AI Assisted Python with Type Hints
microsoft/PromptCraft-Robotics — Community for applying LLMs to robotics and a robot simulator with ChatGPT integration
isLinXu/prompt-engineering-note — 🔥🔔prompt-engineering-note🔔🔥
aloth/RogueGPT — RogueGPT: A controlled stimulus generator for AI news authenticity research. (arXiv:2601.21963...
shun0t/versatile_bot_project — Transforming NotebookLM into a versatile bot