AINTRUST-AI/aixploit

Engineered to help red teams and penetration testers exploit large language model AI solutions vulnerabilities.

44
/ 100
Emerging

This tool helps AI security researchers and Red Teams find weaknesses in large language model AI solutions. You input the AI model you want to test and the types of security attacks you want to simulate (like privacy or integrity breaches). It then shows you which attacks were successful, providing a clear report on how vulnerable your AI system is. This is for security professionals dedicated to safeguarding AI.

No commits in the last 6 months. Available on PyPI.

Use this if you are a red teamer or AI security researcher needing to proactively test the security and robustness of your organization's large language models against various exploitation techniques.

Not ideal if you are looking for a general-purpose AI development framework or a tool for routine model performance evaluation.

AI-security red-teaming penetration-testing LLM-vulnerabilities AI-risk-assessment
Stale 6m
Maintenance 2 / 25
Adoption 4 / 25
Maturity 25 / 25
Community 13 / 25

How are scores calculated?

Stars

8

Forks

2

Language

Python

License

GPL-3.0

Last pushed

Jun 24, 2025

Commits (30d)

0

Dependencies

6

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/AINTRUST-AI/aixploit"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.