babelcloud/LLM-RGB
LLM Reasoning and Generation Benchmark. Evaluate LLMs in complex scenarios systematically.
This project offers a collection of detailed test cases (prompts) to evaluate how well Large Language Models (LLMs) can reason and generate responses in complex scenarios. It runs the LLMs you configure against these prompts and returns a performance score based on how accurately they follow instructions, handle long contexts, and perform multi-step reasoning. It is aimed at AI/ML engineers, product managers, and researchers who need to rigorously assess LLM capabilities beyond simple chat interactions.
166 stars. No commits in the last 6 months.
Use this if you need to systematically benchmark different LLMs or monitor the performance of your LLM in real-world applications that involve lengthy inputs, intricate logic, or strict output formats.
Not ideal if you are looking for a comprehensive, all-encompassing LLM benchmark or a tool to evaluate simple, conversational AI interactions.
Stars: 166
Forks: 16
Language: TypeScript
License: MIT
Category: Prompt Engineering
Last pushed: May 25, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/babelcloud/LLM-RGB"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
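For programmatic use, the same endpoint can be queried from code. The TypeScript sketch below mirrors the curl command above; it assumes the endpoint returns JSON, and the RepoQuality field names are illustrative assumptions rather than a documented response schema.

// Minimal sketch: fetch the same data the curl command above returns.
// Assumes a JSON response; field names below are hypothetical.
const ENDPOINT =
  "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/babelcloud/LLM-RGB";

interface RepoQuality {
  // Hypothetical shape, mirroring the stats shown on this page.
  stars?: number;
  forks?: number;
  language?: string;
  license?: string;
  lastPushed?: string;
}

async function fetchRepoQuality(): Promise<RepoQuality> {
  const res = await fetch(ENDPOINT);
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }
  return (await res.json()) as RepoQuality;
}

fetchRepoQuality()
  .then((data) => console.log(data))
  .catch((err) => console.error(err));

No API key is needed for the free tier (100 requests/day); a registered key raises the limit to 1,000/day.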
Higher-rated alternatives
microsoft/promptbench
A unified evaluation framework for large language models
uptrain-ai/uptrain
UpTrain is an open-source unified platform to evaluate and improve Generative AI applications....
levitation-opensource/Manipulative-Expression-Recognition
MER is a software that identifies and highlights manipulative communication in text from human...
microsoftarchive/promptbench
A unified evaluation framework for large language models
gabe-mousa/Apolien
AI Safety Evaluation Library