Azure/PyRIT

The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and engineers to proactively identify risks in generative AI systems.

61 / 100 · Established

You supply your AI models and attack inputs, and PyRIT uncovers vulnerabilities and unsafe behaviors, helping you harden your systems before deployment. It is designed for those responsible for the safe and ethical deployment of AI.

3,546 stars. Actively maintained with 7 commits in the last 30 days.

Use this if you need to systematically test your generative AI applications for security flaws, biases, or unwanted outputs before they reach end-users.

Not ideal if you are looking for a general-purpose AI development framework or a tool to build AI models from scratch.

Tags: AI-security, risk-assessment, generative-AI-safety, AI-red-teaming, vulnerability-testing
No package published · No dependents
Maintenance: 17 / 25
Adoption: 10 / 25
Maturity: 9 / 25
Community: 25 / 25


Stars: 3,546
Forks: 690
Language: Python
License: MIT
Last pushed: Mar 13, 2026
Commits (30d): 7

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/Azure/PyRIT"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
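The same request can be made from Python using only the standard library. This is a minimal sketch: the URL path mirrors the curl command above (`category/owner/repo`), but the shape of the JSON response is an assumption, since the API's schema is not documented here.

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repo, URL-encoding each segment."""
    return f"{API_BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report.

    The returned field names (stars, forks, sub-scores) are assumptions
    about the response body, not documented API behavior.
    """
    with urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# Example: the same request as the curl command above.
url = quality_url("generative-ai", "Azure", "PyRIT")
# report = fetch_quality("generative-ai", "Azure", "PyRIT")  # live network call
```

Keeping the URL construction separate from the network call makes it easy to swap `urlopen` for a client that adds an API key header once you have a free key.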