KryptSec/oasis

Open-source AI security benchmarking CLI. Measure how AI models perform offensive security tasks with MITRE ATT&CK analysis and KSM scoring.

Score: 53 / 100 (Established)

This tool helps cybersecurity professionals and red teamers understand how well different AI models can perform offensive security tasks like finding and exploiting vulnerabilities. You provide an AI model and a target system (a "challenge"), and it outputs a detailed report on the AI's performance, including MITRE ATT&CK analysis and a KSM security score. Security researchers, penetration testers, and AI security evaluators would find this useful for benchmarking and comparing AI offensive capabilities.

Available on npm.

Use this if you need to objectively benchmark and analyze the offensive security capabilities of various AI models against standardized vulnerabilities, generating detailed, reproducible reports.

Not ideal if you are looking for a defensive AI security tool or a solution for evaluating non-offensive AI model risks.

AI-security-evaluation penetration-testing red-teaming vulnerability-analysis AI-benchmarking
Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 22 / 25
Community: 15 / 25


Stars: 16
Forks: 4
Language: TypeScript
License: MIT
Last pushed: Mar 10, 2026
Commits (30d): 0
Dependencies: 11

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/KryptSec/oasis"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.