linhaowei1/kumo
☁️ KUMO: Generative Evaluation of Complex Reasoning in Large Language Models
This project evaluates how well large language models (LLMs) solve complex reasoning problems. From a set of predefined truths, actions, and outcomes (in a medical domain, for example: candidate diagnoses, tests, and test results), it generates detailed reasoning games. The output is a benchmark that measures an LLM's ability to deduce the correct truth efficiently. It is designed for AI researchers, machine learning engineers, and data scientists who are developing or comparing LLMs on tasks requiring logical deduction.
No commits in the last 6 months.
Use this if you need to rigorously test the complex reasoning capabilities of large language models across various domain-specific scenarios, using procedurally generated tasks.
Not ideal if you are looking for a simple, off-the-shelf evaluation of basic language understanding or generation tasks, or if you don't work with LLMs.
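To make the game setup concrete, here is a minimal, hypothetical sketch of a deduction game over truths, actions, and outcomes. Everything in it (the toy medical domain, the function names, the greedy baseline) is an illustrative assumption for exposition, not KUMO's actual code or API; KUMO generates such games procedurally and at much larger scale.

```python
# Illustrative sketch only -- names and structure are assumptions,
# not KUMO's actual API.
import random

TRUTHS = ["flu", "allergy", "covid"]

# Each action (a test) maps every possible truth to an observable outcome.
OUTCOMES = {
    "temperature_test": {"flu": "fever", "allergy": "normal", "covid": "fever"},
    "pcr_test": {"flu": "negative", "allergy": "negative", "covid": "positive"},
}

def play(agent, seed=0):
    """Run one deduction game; returns (guessed correctly?, actions used)."""
    hidden = random.Random(seed).choice(TRUTHS)
    candidates = set(TRUTHS)
    steps = 0
    while len(candidates) > 1:
        action = agent(candidates, OUTCOMES)   # agent picks a test
        observed = OUTCOMES[action][hidden]    # world reveals the outcome
        # Deduction: keep only truths consistent with what was observed.
        candidates = {t for t in candidates if OUTCOMES[action][t] == observed}
        steps += 1
    return candidates.pop() == hidden, steps

def greedy_agent(candidates, outcomes):
    """Baseline: pick any test whose outcomes still differ across candidates."""
    for action, table in outcomes.items():
        if len({table[t] for t in candidates}) > 1:
            return action
    return next(iter(outcomes))  # fallback (unreachable in this toy domain)

print(play(greedy_agent, seed=1))  # -> (True, 1) or (True, 2)
```

The returned step count mirrors the efficiency aspect the benchmark scores: stronger reasoners identify the hidden truth in fewer actions.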
Stars: 19
Forks: 1
Language: Jupyter Notebook
License: Apache-2.0
Category: (none listed)
Last pushed: Jun 04, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/linhaowei1/kumo"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
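For programmatic access, a minimal Python sketch using only the standard library (the endpoint and rate limits come from the listing above; the response schema is not documented here, so the payload is printed as-is):

```python
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/linhaowei1/kumo"

# No key needed up to 100 requests/day (per the note above).
with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))
```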
Higher-rated alternatives
EvolvingLMMs-Lab/lmms-eval: One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks
vibrantlabsai/ragas: Supercharge Your LLM Application Evaluations 🚀
open-compass/VLMEvalKit: Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks
EuroEval/EuroEval: The robust European language model benchmark.
Giskard-AI/giskard-oss: 🐢 Open-Source Evaluation & Testing library for LLM Agents