simboli/security-instructions-extraction-GPTs

Security instructions for custom ChatGPT applications

36
/ 100
Emerging

This project helps developers and security professionals understand how malicious actors might try to extract sensitive instructions or training data from their custom ChatGPT applications. It provides common techniques for both extracting and preventing the extraction of critical prompts and underlying files. The target user is anyone responsible for the security and integrity of a custom GPT, ensuring its private information remains confidential.

No commits in the last 6 months.

Use this if you are building or managing a custom GPT and need to secure its proprietary instructions and training data from unauthorized access.

Not ideal if you are looking for a general guide on securing large language models (LLMs) outside of custom GPT applications.

AI security prompt engineering GPT development data privacy application security
Stale 6m No Package No Dependents
Maintenance 2 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 13 / 25

How are scores calculated?

Stars

9

Forks

2

Language

License

MIT

Last pushed

May 23, 2025

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/simboli/security-instructions-extraction-GPTs"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.