jpmorganchase/CyberBench
CyberBench: A Multi-Task Cyber LLM Benchmark
This tool helps cybersecurity researchers and AI developers assess how well large language models (LLMs) understand and process cybersecurity-related text. It takes LLMs and cybersecurity datasets as input, then measures their performance on tasks such as identifying key entities, summarizing incidents, and classifying threats. The output shows each LLM's strengths and weaknesses for cybersecurity applications.
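As a rough illustration of the kind of evaluation loop such a benchmark runs, the sketch below scores a model callable on a tiny threat-classification task. The example data, the accuracy metric, and the baseline "model" are hypothetical placeholders written for this listing, not CyberBench's actual datasets or API.

# Hypothetical sketch of a benchmark-style evaluation loop; not CyberBench's real API.
from typing import Callable

# Toy labeled examples standing in for a cybersecurity classification dataset.
EXAMPLES = [
    {"text": "Phishing email with a credential-harvesting link was reported.", "label": "phishing"},
    {"text": "Ransomware encrypted file shares on the finance network overnight.", "label": "ransomware"},
]

def accuracy(predict: Callable[[str], str], examples: list[dict]) -> float:
    # Compare model predictions against gold labels and return the fraction correct.
    correct = sum(predict(ex["text"]) == ex["label"] for ex in examples)
    return correct / len(examples)

if __name__ == "__main__":
    # Stand-in "model" that always answers "phishing"; swap in a real LLM call here.
    baseline = lambda text: "phishing"
    print(f"accuracy: {accuracy(baseline, EXAMPLES):.2f}")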
No commits in the last 6 months.
Use this if you are a researcher or developer who needs to rigorously benchmark and compare different LLMs for their effectiveness in cybersecurity natural language processing tasks.
Not ideal if you are a cybersecurity practitioner looking for an out-of-the-box solution to directly analyze security logs or implement threat intelligence, as this is a developer tool for model evaluation.
Stars: 30
Forks: 4
Language: Python
License: Apache-2.0
Category:
Last pushed: Apr 29, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/jpmorganchase/CyberBench"
Open to everyone: 100 requests/day with no key required. A free API key raises the limit to 1,000 requests/day.
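The same endpoint can be queried from Python; a minimal sketch using the requests library is shown below. It assumes the endpoint returns JSON, and the response fields are not documented in this listing.

# Minimal sketch: fetch the repo-quality data in Python (assumes a JSON response).
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/jpmorganchase/CyberBench"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
data = resp.json()
print(data)  # Field names depend on the API and are not documented here.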
Higher-rated alternatives
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems