jpmorganchase/CyberBench

CyberBench: A Multi-Task Cyber LLM Benchmark

Overall score: 37 / 100 (Emerging)

This tool helps cybersecurity researchers and AI developers assess how well large language models (LLMs) understand and process cybersecurity-related text. It takes various LLMs and cybersecurity datasets as input, then measures their performance on tasks like identifying key entities, summarizing incidents, or classifying threats. The output provides insights into each LLM's strengths and weaknesses for cyber applications.
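
To make the kind of evaluation concrete: a benchmark like this typically compares model output against gold labels per task. The sketch below scores a single named-entity-extraction example with set-based precision, recall, and F1. The function name and data are purely illustrative, not CyberBench's actual API.

# Illustrative only: a minimal set-based F1 scorer of the kind a cyber
# NER benchmark might use. Names and data are hypothetical.
def entity_f1(predicted: set[str], gold: set[str]) -> float:
    if not predicted and not gold:
        return 1.0
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = {"CVE-2021-44228", "Log4j", "Apache"}
predicted = {"CVE-2021-44228", "Log4j", "RCE"}
print(f"F1: {entity_f1(predicted, gold):.2f}")  # 2 of 3 correct -> F1 = 0.67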

No commits in the last 6 months.

Use this if you are a researcher or developer who needs to rigorously benchmark and compare different LLMs for their effectiveness in cybersecurity natural language processing tasks.

Not ideal if you are a cybersecurity practitioner looking for an out-of-the-box solution to directly analyze security logs or implement threat intelligence, as this is a developer tool for model evaluation.

Tags: cybersecurity-research · LLM-evaluation · natural-language-processing · AI-model-benchmarking · threat-intelligence-NLP
Badges: Stale (6 months) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 12 / 25

How are scores calculated?
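
The listing does not state the formula, but the four category scores above sum exactly to the overall badge, which suggests a simple additive model. This is an assumption, not documented behavior:

# Assumption: overall = sum of the four category scores (each out of 25).
# This matches the 37/100 badge above but is not confirmed by the listing.
subscores = {"Maintenance": 2, "Adoption": 7, "Maturity": 16, "Community": 12}
overall = sum(subscores.values())
print(overall)  # 37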

Stars: 30
Forks: 4
Language: Python
License: Apache-2.0
Last pushed: Apr 29, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/jpmorganchase/CyberBench"

Open to everyone: 100 requests/day with no key needed. A free API key raises the limit to 1,000 requests/day.
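
A minimal Python sketch of the same request using only the standard library. It assumes the endpoint returns JSON; the response schema is not documented here, so the snippet just prints whatever fields come back.

# Fetch the quality record for this repo; assumes a JSON response.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/jpmorganchase/CyberBench"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))  # inspect the fields the API returns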