AtomEcho/AtomBulb
Aims to provide an intuitive, concrete, and standardized evaluation of today's mainstream LLMs.
This project helps evaluate and compare large language models (LLMs) such as ChatGPT, ChatGLM, and others. It takes a set of questions spanning various categories (e.g., general knowledge, creative writing, logical reasoning) as input, collects answers from multiple LLMs, and scores them. The output lets researchers, developers, and product managers see the strengths and weaknesses of each model.
No commits in the last 6 months.
Use this if you need to quantitatively assess and benchmark the capabilities of various large language models across a standardized set of tasks and domains.
Not ideal if you are looking to fine-tune a specific large language model or build an application on top of an LLM, as this tool focuses on evaluation rather than development.
Stars: 94
Forks: 5
Language: —
License: —
Category: —
Last pushed: Jun 20, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/AtomEcho/AtomBulb"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
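For scripted use, a minimal Python sketch of the same request is below. It assumes the endpoint returns a JSON body on an unauthenticated free-tier call; the response schema is not documented here, so inspect the printed structure rather than assuming field names. How an API key would be passed (header vs. query parameter) is also not stated, so this sketch omits it.

    import json
    import urllib.request

    # Endpoint from the listing above; the free tier allows
    # 100 requests/day without a key.
    URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/AtomEcho/AtomBulb"

    with urllib.request.urlopen(URL) as resp:
        data = json.load(resp)  # assumption: the body is JSON

    # The schema is undocumented; print it to discover the fields.
    print(json.dumps(data, indent=2))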
Higher-rated alternatives
EvolvingLMMs-Lab/lmms-eval
One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks
vibrantlabsai/ragas
Supercharge Your LLM Application Evaluations 🚀
open-compass/VLMEvalKit
Open-source evaluation toolkit for large multi-modality models (LMMs); supports 220+ LMMs and 80+ benchmarks
EuroEval/EuroEval
The robust European language model benchmark.
Giskard-AI/giskard-oss
🐢 Open-Source Evaluation & Testing library for LLM Agents