Vibhanshu-555/Human-Aligned-LLM-Evaluation-Audit
A data-driven audit of AI judge reliability using MT-Bench human annotations. This project analyzes 3,500+ model comparisons across 6 LLMs and 8 task categories to measure how well GPT-4 evaluations align with human judgment. Includes a Python workflow, disagreement metrics, and a Power BI dashboard for insights.
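The audit's central measurement is whether GPT-4's verdict on each comparison matches the human annotators' verdict. Below is a minimal sketch of a disagreement metric in that spirit: raw agreement plus Cohen's kappa, which discounts agreement expected by chance. The field names and sample records are hypothetical; the repo's actual data schema may differ.

# Hypothetical record layout: each comparison holds a human verdict and a
# GPT-4 verdict, one of "model_a", "model_b", or "tie".
from collections import Counter

comparisons = [
    {"human": "model_a", "gpt4": "model_a"},
    {"human": "model_b", "gpt4": "model_a"},
    {"human": "tie",     "gpt4": "tie"},
]

labels = ["model_a", "model_b", "tie"]
n = len(comparisons)

# Raw agreement: fraction of comparisons where both judges pick the same verdict.
p_observed = sum(c["human"] == c["gpt4"] for c in comparisons) / n

# Chance agreement: sum over labels of the product of each judge's marginal
# frequency for that label.
human_counts = Counter(c["human"] for c in comparisons)
gpt4_counts = Counter(c["gpt4"] for c in comparisons)
p_chance = sum((human_counts[l] / n) * (gpt4_counts[l] / n) for l in labels)

# Cohen's kappa corrects raw agreement for agreement expected by chance.
kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"agreement={p_observed:.2f}  kappa={kappa:.2f}")

Kappa matters here because raw agreement is inflated whenever one verdict (such as ties) dominates the labels.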
Stars: —
Forks: —
Language: HTML
License: MIT
Category: —
Last pushed: Nov 24, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Vibhanshu-555/Human-Aligned-LLM-Evaluation-Audit"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
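The same endpoint can also be called from Python, the project's own workflow language. A minimal sketch, assuming the endpoint returns JSON (neither the response schema nor the header used to pass an API key is documented on this page):

import requests

URL = (
    "https://pt-edge.onrender.com/api/v1/quality/llm-tools/"
    "Vibhanshu-555/Human-Aligned-LLM-Evaluation-Audit"
)

# Fetch the repo's quality record; raise on HTTP errors (e.g. rate limiting).
resp = requests.get(URL, timeout=10)
resp.raise_for_status()
print(resp.json())  # assumed JSON body; inspect resp.text if parsing fails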
Higher-rated alternatives
Giskard-AI/giskard-oss
🐢 Open-Source Evaluation & Testing library for LLM Agents
aiverify-foundation/moonshot
Moonshot - A simple and modular tool to evaluate and red-team any LLM application.
parameterlab/MASEval
Multi-Agent LLM Evaluation
mohsenhariri/scorio
Statistical evaluation, comparison, and ranking of Large Language Models
fiddler-labs/fiddler-auditor
Fiddler Auditor is a tool to evaluate language models.