kieranklaassen/leva
LLM evaluation framework for Rails apps, designed for use with production data.
This framework helps product teams and machine learning engineers fine-tune and improve the AI models built into Ruby on Rails applications. It takes your existing application data and an AI model's outputs, then lets you systematically evaluate how well the AI performs on real-world scenarios, so you can continuously refine the AI's prompts and logic and ensure it delivers accurate, useful results for your users.
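To illustrate the kind of evaluation loop this enables, here is a minimal Ruby sketch of running a model over production records and scoring the outputs. Every name below (SentimentRunner, ExactMatchEval, SupportTicket, MyApp::SentimentClassifier) is a hypothetical assumption for illustration, not Leva's documented API.

# Hypothetical sketch of an evaluation loop over production data.
# All class, model, and helper names here are illustrative
# assumptions, not Leva's actual API.

class SentimentRunner
  # Invokes the application's existing AI code for one record.
  def execute(record)
    MyApp::SentimentClassifier.call(record.text) # assumed app-specific call
  end
end

class ExactMatchEval
  # Scores a prediction against the label stored with the record.
  def evaluate(prediction, record)
    prediction == record.expected_label ? 1.0 : 0.0
  end
end

records = SupportTicket.where.not(expected_label: nil).limit(100)
runner  = SentimentRunner.new
scorer  = ExactMatchEval.new

scores = records.map { |r| scorer.evaluate(runner.execute(r), r) }
puts "Accuracy: #{(100.0 * scores.sum / scores.size).round(1)}%"

In a full setup, a framework like this would also persist each run and score, so you can compare prompt revisions against the same production records over time.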
Use this if you are a product manager or an engineer running a Rails application with an integrated AI model and need a structured way to test and improve its accuracy against production data.
Not ideal if your application isn't built with Ruby on Rails, or if you don't have an existing AI model integrated into a production environment.
Stars: 133
Forks: 7
Language: HTML
License: MIT
Category:
Last pushed: Mar 09, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/kieranklaassen/leva"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
EvolvingLMMs-Lab/lmms-eval
One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks
vibrantlabsai/ragas
Supercharge Your LLM Application Evaluations 🚀
open-compass/VLMEvalKit
Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks
EuroEval/EuroEval
The robust European language model benchmark.
Giskard-AI/giskard-oss
🐢 Open-Source Evaluation & Testing library for LLM Agents