ReversecLabs/spikee
Simple Prompt Injection Kit for Evaluation and Exploitation
This toolkit helps security engineers and red teamers evaluate how well large language models (LLMs), their applications, and safety guardrails can withstand malicious input. You provide various prompt injections and jailbreaking attempts, and the tool assesses the LLM's response to reveal vulnerabilities. This is essential for anyone developing or deploying LLM-powered systems.
162 stars.
Use this if you need to systematically test the security and robustness of your LLM-based applications against prompt injection and jailbreaking attacks.
Not ideal if you are looking for a general-purpose LLM development framework or a tool for routine content moderation.
Stars
162
Forks
35
Language
HTML
License
Apache-2.0
Category
Last pushed
Mar 27, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/ReversecLabs/spikee"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
microsoft/prompty
Prompty makes it easy to create, manage, debug, and evaluate LLM prompts for your AI...
google/dotprompt
Executable GenAI prompt templates
svilupp/PromptingTools.jl
Streamline your life using PromptingTools.jl, the Julia package that simplifies interacting with...
elastacloud/DotPrompt
Library allowing you to use GenAI prompts saved as .prompt files, keeping your prompts organised...
betterprompt-com/Rocket
Better Prompt Rocket is an AI-optimised, accessibility-first web template designed for the...