justplus/llm-eval
A large language model evaluation platform supporting multiple evaluation benchmarks, custom datasets, and performance testing. Also supports RAG evaluation on custom datasets.
This platform helps AI product managers and researchers quickly evaluate the performance of large language models (LLMs). You can upload your own datasets (like Q&A pairs, multiple-choice questions, or RAG data) and it outputs detailed reports on model accuracy, latency, and throughput. It's designed for anyone needing to compare, test, and optimize LLMs for specific applications.
No commits in the last 6 months.
Use this if you need a comprehensive tool to test and compare different large language models using your own specific data and evaluation criteria, including RAG-based scenarios.
Not ideal if you are looking for a simple API or library to integrate LLM evaluation into an existing development pipeline without a user interface.
Stars: 82
Forks: 18
Language: Python
License: MIT
Category:
Last pushed: Aug 20, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/justplus/llm-eval"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
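If you prefer to query the endpoint from Python rather than curl, a minimal sketch is below. The response schema isn't documented on this page, so the script simply pretty-prints whatever JSON the service returns; the use of the requests library, the timeout value, and the error handling are assumptions for illustration, not part of the project.

# Fetch the quality data for justplus/llm-eval from the public API (no key, 100 requests/day).
import json

import requests

API_URL = "https://pt-edge.onrender.com/api/v1/quality/rag/justplus/llm-eval"

def fetch_quality_data(url: str = API_URL) -> dict:
    """Return the JSON payload for the repository, raising on HTTP errors."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # The exact fields are not documented here, so just pretty-print the payload.
    print(json.dumps(fetch_quality_data(), indent=2, ensure_ascii=False))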
Higher-rated alternatives
modelscope/evalscope
A streamlined and customizable framework for efficient large model (LLM, VLM, AIGC) evaluation...
izam-mohammed/ragrank
🎯 Your free LLM evaluation toolkit helps you assess the accuracy of facts, how well it...
Kareem-Rashed/rubric-eval
Independent framework to test, benchmark, and evaluate LLMs & AI agents locally.
relari-ai/continuous-eval
Data-Driven Evaluation for LLM-Powered Applications
cleanlab/tlm
Score the trustworthiness of outputs from any LLM in real-time