simboli/security-instructions-extraction-GPTs
Security instructions for custom ChatGPT applications
This project helps developers and security professionals understand how malicious actors might try to extract sensitive instructions or training data from their custom ChatGPT applications. It provides common techniques for both extracting and preventing the extraction of critical prompts and underlying files. The target user is anyone responsible for the security and integrity of a custom GPT, ensuring its private information remains confidential.
No commits in the last 6 months.
Use this if you are building or managing a custom GPT and need to secure its proprietary instructions and training data from unauthorized access.
Not ideal if you are looking for a general guide on securing large language models (LLMs) outside of custom GPT applications.
Stars
9
Forks
2
Language
—
License
MIT
Category
Last pushed
May 23, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/simboli/security-instructions-extraction-GPTs"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
binary-husky/gpt_academic
为GPT/GLM等LLM大语言模型提供实用化交互接口,特别优化论文阅读/润色/写作体验,模块化设计,支持自定义快捷按钮&函数插件,支持Python和C++等项目剖析&自译解功能,PDF/LaTe...
Oct4Pie/zero-zerogpt
Bypassing AI Content Detectors like ZeroGPT and GPTZero with Unicode Spacing
ZacharyZcR/SecGPT
A Test Project for a Network Security-oriented LLM Tool Emulating AutoGPT
ricardobalk/HackGPT
A powerful and customizable ChatGPT-like interface, built for developers.
dylanhogg/gptauthor
GPTAuthor is an AI tool for writing long form, multi-chapter stories given a story prompt.