justplus/llm-eval

A large language model evaluation platform supporting multiple evaluation benchmarks, custom datasets, and performance testing. Also supports RAG evaluation on custom datasets.

45 / 100 (Emerging)

This platform helps AI product managers and researchers quickly evaluate the performance of large language models (LLMs). You upload your own datasets (such as Q&A pairs, multiple-choice questions, or RAG data) and it outputs detailed reports on model accuracy, latency, and throughput. It is designed for anyone who needs to compare, test, and optimize LLMs for a specific application.

No commits in the last 6 months.

Use this if you need a comprehensive tool to test and compare different large language models using your own specific data and evaluation criteria, including RAG-based scenarios.

Not ideal if you are looking for a simple API or library to integrate LLM evaluation into an existing development pipeline without a user interface.

AI-evaluation LLM-benchmarking NLP-testing model-comparison RAG-assessment
Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 9 / 25
Maturity 15 / 25
Community 19 / 25


Stars: 82
Forks: 18
Language: Python
License: MIT
Last pushed: Aug 20, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/justplus/llm-eval"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
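
For scripted access, the same endpoint can be called from Python. A minimal sketch, assuming the endpoint returns JSON (the exact response fields are not documented here, so the snippet simply prints the payload; requires the requests package):

import requests

# Quality data for justplus/llm-eval (100 requests/day without an API key)
url = "https://pt-edge.onrender.com/api/v1/quality/rag/justplus/llm-eval"
response = requests.get(url, timeout=10)
response.raise_for_status()
print(response.json())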