VikhrModels/ru_llm_arena
Modified Arena-Hard-Auto LLM evaluation toolkit with an emphasis on the Russian language
This tool evaluates how well different large language models (LLMs) respond to prompts in Russian. Given a list of LLM names and a fixed set of 500 diverse Russian prompts, it automatically compares each model's answers against a baseline model (GPT-3.5-turbo-0125). The output is an Elo-style rating and a win-rate score for each LLM, showing which ones are best suited for generating Russian text. It is aimed at researchers, AI product managers, and developers who need to select the most capable LLMs for Russian-language applications.
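To make the scoring concrete, here is a minimal sketch of the Arena-Hard-style aggregation step: pairwise verdicts against the baseline become a win rate, which maps to an Elo-style rating through the logistic Elo expectation. This is an illustration, not the repository's actual code; the verdict list, the baseline anchor of 1000, and the helper names are all assumptions.

import math

BASELINE_ELO = 1000  # anchor rating for the baseline model (assumed value)

def win_rate(verdicts):
    # Fraction of pairwise comparisons the candidate wins; a tie counts as half.
    score = sum(1.0 if v == "win" else 0.5 if v == "tie" else 0.0 for v in verdicts)
    return score / len(verdicts)

def elo_from_win_rate(p, baseline=BASELINE_ELO):
    # Invert the logistic Elo expectation E = 1 / (1 + 10**((Rb - Ra) / 400)).
    p = min(max(p, 1e-6), 1 - 1e-6)  # clamp so the log stays finite at 0% or 100%
    return baseline + 400 * math.log10(p / (1 - p))

# Hypothetical judge verdicts for one candidate model against the baseline.
verdicts = ["win", "win", "tie", "loss", "win"]
p = win_rate(verdicts)
print(f"win rate: {p:.2f}, Elo vs baseline: {elo_from_win_rate(p):.0f}")

Anchoring the baseline at a fixed rating means each candidate's Elo can be read directly off its win rate, so a leaderboard built this way is cheap to recompute as new verdicts arrive.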
No commits in the last 6 months.
Use this if you need an automated, objective, and detailed comparison of LLMs on the quality of their Russian-language responses.
Not ideal if you are evaluating English-language LLMs or if you prefer manual, human-based evaluation over automated metrics.
Stars
47
Forks
9
Language
Python
License
Apache-2.0
Category
LLM tools
Last pushed
Mar 20, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/VikhrModels/ru_llm_arena"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
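For scripted access, a minimal Python equivalent of the curl call above might look like the following; the URL is the one shown, but the response schema is not documented on this page, so treat any field names you read out of the payload as assumptions.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/VikhrModels/ru_llm_arena"
resp = requests.get(url, timeout=10)  # anonymous access: 100 requests/day
resp.raise_for_status()               # fail loudly on rate limiting or server errors
data = resp.json()
print(data)  # inspect the payload; keys are undocumented here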
Higher-rated alternatives
EvolvingLMMs-Lab/lmms-eval
One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks
vibrantlabsai/ragas
Supercharge Your LLM Application Evaluations 🚀
open-compass/VLMEvalKit
Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks
EuroEval/EuroEval
The robust European language model benchmark.
Giskard-AI/giskard-oss
🐢 Open-Source Evaluation & Testing library for LLM Agents