SAP/STARS

An AI agent that conducts vulnerability tests on LLMs from SAP AI Core, from local deployments, or from models hosted on Hugging Face. The goal of the project is to identify and correct potential security vulnerabilities.

Score: 50 / 100 (Established)

This tool helps you test your Large Language Models for security vulnerabilities before they are deployed. You provide your LLM, and the tool uses an AI agent to run various attacks, such as prompt injections or data-leakage tests. The output is a report detailing potential weaknesses, allowing you to strengthen your model's defenses. It is intended for AI security engineers, MLOps specialists, and developers deploying LLMs.

Use this if you need to proactively identify and fix security flaws in your LLMs to prevent misuse or data breaches.

Not ideal if you are looking for a general-purpose functional testing tool rather than a specialized security vulnerability scanner for LLMs.

Keywords: AI security, LLM vulnerability testing, MLOps security, prompt injection, AI model auditing
No package · No dependents
Maintenance: 10 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 16 / 25


Stars: 43
Forks: 8
Language: Python
License: Apache-2.0
Last pushed: Mar 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/SAP/STARS"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.