llm-platform-security/chatgpt-plugin-eval

LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins

Score: 31 / 100 (Emerging)

This project helps large language model (LLM) platform designers systematically evaluate and improve the security, privacy, and safety of platforms that integrate third-party plugins. It takes information on LLM platform architecture and plugin capabilities as input, and produces a framework and attack taxonomy for identifying potential vulnerabilities. Its primary users are security architects and platform designers at companies building LLM-based services.

No commits in the last 6 months.

Use this if you are designing or managing an LLM platform that integrates third-party plugins and need a systematic way to identify and mitigate security, privacy, and safety risks.

Not ideal if you are an end-user of an LLM platform simply looking for advice on how to use plugins more safely, as this is a framework for platform developers.

Tags: LLM platform security, API security, third-party integration, privacy engineering, risk assessment
Badges: No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 8 / 25
Community: 16 / 25

How are scores calculated?
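The site does not document its scoring formula here, but the four category scores above sum exactly to the overall score. A minimal sketch, assuming the total is simply the sum of the four 25-point categories:

```python
# Hypothetical recomputation of the overall score, assuming it is
# the plain sum of the four 25-point category scores shown above.
categories = {
    "Maintenance": 0,
    "Adoption": 7,
    "Maturity": 8,
    "Community": 16,
}

total = sum(categories.values())  # each category is capped at 25
print(f"{total} / 100")  # matches the 31 / 100 shown for this repo
```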

Stars: 29
Forks: 7
Language: HTML
License: none
Last pushed: Jul 29, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/llm-platform-security/chatgpt-plugin-eval"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
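The same data can be fetched programmatically. A minimal Python sketch, assuming the URL follows the `{collection}/{owner}/{repo}` pattern shown in the curl example above (the response's JSON schema is not documented here):

```python
# Build the quality-score endpoint URL for a repository and
# (optionally) fetch it. The path segments mirror the curl example;
# "collection" naming is an assumption, not documented API terminology.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection: str, owner: str, repo: str) -> str:
    """Return the endpoint URL for one repository's quality data."""
    return f"{BASE}/{collection}/{owner}/{repo}"

url = quality_url("llm-tools", "llm-platform-security", "chatgpt-plugin-eval")
print(url)
# data = json.load(urllib.request.urlopen(url))  # uncomment to fetch live
```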