Laoyu84/4onebench
A minimalist benchmarking tool designed to test the routine-generation capabilities of LLMs.
This tool helps you quickly assess how well a large language model (LLM) can create automated routines for orchestrating IT assets, such as APIs, in a business context. You provide a task and a knowledge graph of your IT assets, and the benchmark measures how accurately the LLM generates the correct sequence of actions in a single attempt. It is aimed at professionals building or integrating LLM-powered agents who need to compare models on their ability to automate IT workflows.
No commits in the last 6 months.
Use this if you need to evaluate and compare various LLMs for their capability to generate accurate, single-shot operational routines for IT asset orchestration.
Not ideal if you are looking for a tool to develop or deploy LLM agents directly, or if your primary need is general-purpose LLM evaluation beyond routine generation.
Stars: 27
Forks: 4
Language: Python
License: Apache-2.0
Category:
Last pushed: Nov 28, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Laoyu84/4onebench"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
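If you prefer Python over curl, a minimal sketch using the requests library against the same endpoint could look like the following; the response schema is not documented here, so the example simply prints the returned JSON.

import requests

# Endpoint taken from the curl example above; no API key is needed
# within the free 100 requests/day tier.
URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Laoyu84/4onebench"

resp = requests.get(URL, timeout=30)
resp.raise_for_status()          # fail loudly on HTTP errors
print(resp.json())               # response fields are not assumed here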
Higher-rated alternatives
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems