ScholarXIV/enkokilish_bench
Amharic Riddle Benchmark for LLMs
This tool helps researchers and linguists assess how well large language models (LLMs) understand and solve Amharic riddles (enkokilish). Given an LLM, it produces an evaluation of the model's reasoning on Amharic-language riddles, and it is aimed at anyone working with or researching Amharic language models.
Use it to systematically test and compare different LLMs' proficiency in Amharic riddle comprehension and problem-solving.
Not ideal if you want to generate new Amharic riddles, or if you need general Amharic translation or text generation.
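To make the workflow concrete, here is an illustrative Python sketch of a riddle-evaluation loop. Everything in it is hypothetical: the repository itself is a Svelte app, so the dataset format, the ask_model callable, and the exact-match scoring rule below are assumptions that only mirror the idea, not the project's actual code.

# Illustrative sketch of a riddle-benchmark evaluation loop.
# Hypothetical throughout: dataset format, ask_model interface, and
# exact-match scoring are assumptions, not taken from this repository.
from typing import Callable

# Placeholder entries; the real benchmark supplies its own riddle set.
RIDDLES = [
    {"riddle": "<Amharic riddle text>", "answer": "<expected answer>"},
]

def evaluate(ask_model: Callable[[str], str]) -> float:
    """Score a model by exact-match accuracy over the riddle set."""
    correct = 0
    for item in RIDDLES:
        prediction = ask_model(item["riddle"]).strip()
        if prediction == item["answer"]:
            correct += 1
    return correct / len(RIDDLES)

A real harness would likely use fuzzier matching than exact string equality, since riddle answers can be phrased several ways.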
Stars: 25
Forks: 1
Language: Svelte
License: MIT
Last pushed: Dec 27, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ScholarXIV/enkokilish_bench"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
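A minimal sketch of consuming the endpoint from Python. Only the URL and rate limits are documented above, so the X-Api-Key header name and any field names in the response are assumptions; inspect the raw payload before relying on them.

# Minimal sketch: fetch quality data for a repo from the pt-edge API.
# Assumptions: the endpoint returns JSON, and the X-Api-Key header is
# a hypothetical way to send the optional key (the listing does not
# document how the key is passed).
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def fetch_tool(owner: str, repo: str, api_key: str | None = None) -> dict:
    req = urllib.request.Request(f"{BASE}/{owner}/{repo}")
    if api_key:
        req.add_header("X-Api-Key", api_key)  # hypothetical header name
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_tool("ScholarXIV", "enkokilish_bench")
    # Field names are not documented above; print the payload to see them.
    print(json.dumps(data, indent=2))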
Higher-rated alternatives
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems