Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when interacting with LLMs. It provides a protective layer for API calls, safeguarding against common vulnerabilities and ensuring optimal performance. And safe layer again Prompt Injection.
This tool helps developers protect their applications that use large language models (LLMs) from various security threats. It takes user input or LLM responses and scans them for malicious patterns, then either blocks the input, sanitizes it, or validates the output. Developers building LLM-powered applications, such as chatbots or AI assistants, would use this to ensure the safety and reliability of their systems.
Available on PyPI.
Use this if you are developing an application that uses an LLM and need to protect it from prompt injection, data exfiltration, or other security vulnerabilities, ensuring safe and reliable interactions.
Not ideal if you are a non-technical user looking for a pre-built, end-user security solution for your LLM application, as this is a developer library.
Stars
16
Forks
3
Language
Python
License
Apache-2.0
Category
Last pushed
Dec 19, 2025
Commits (30d)
0
Dependencies
25
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/Resk-Security/Resk-LLM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Dicklesworthstone/acip
The Advanced Cognitive Inoculation Prompt