HumanCompatibleAI/tensor-trust

A prompt injection game to collect data for robust ML research

Score: 34 / 100 (Emerging)

This project offers an interactive web game designed to help researchers collect data on prompt injection attacks against large language models. Players interact with AI agents and attempt to trick them into revealing secret information or performing unintended actions. The resulting data is valuable for improving the security and robustness of AI systems, and is primarily used by machine learning researchers and AI security specialists.

No commits in the last 6 months.

Use this if you are an AI researcher or security specialist looking to collect empirical data on prompt injection vulnerabilities through a gamified approach.

Not ideal if you are an end-user simply wanting to test an existing language model's prompt injection resistance, or if you need a general-purpose dataset for training other types of AI models.

Tags: AI security research · large language models · prompt engineering · adversarial machine learning · AI vulnerability assessment
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 10 / 25


Stars: 67
Forks: 6
Language: Python
License: BSD-2-Clause
Last pushed: Jan 27, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/HumanCompatibleAI/tensor-trust"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
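For scripted access, the curl call above can be reproduced in Python. This is a minimal sketch using only the standard library: the endpoint path mirrors the curl example, but the shape of the JSON response and the `quality_url` / `fetch_quality` helper names are assumptions for illustration, not a documented client API.

```python
import json
from urllib.request import urlopen

# Base path taken from the curl example above; anything beyond it is assumed.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(owner: str, repo: str, topic: str = "prompt-engineering") -> str:
    """Build the quality-report URL for a given repository."""
    return f"{BASE}/{topic}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str, topic: str = "prompt-engineering") -> dict:
    """Fetch and decode the JSON quality report (response schema not documented here)."""
    with urlopen(quality_url(owner, repo, topic), timeout=10) as resp:
        return json.load(resp)

# Example: print the URL that the anonymous 100-requests/day tier would hit.
# Calling fetch_quality("HumanCompatibleAI", "tensor-trust") would retrieve live data.
print(quality_url("HumanCompatibleAI", "tensor-trust"))
```

A free API key raises the limit to 1,000 requests/day; how the key is passed (header vs. query parameter) is not specified on this page, so check the service's API docs before adding authentication.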