InternScience/SciEvalKit
A unified evaluation toolkit and leaderboard for rigorously assessing the scientific intelligence of large language and vision–language models across the full research workflow.
This toolkit helps AI researchers and developers measure how well large language and vision-language models perform on complex scientific tasks, rather than general conversation. Given a model and a set of scientific challenges (such as interpreting images, symbolic reasoning, or generating code), it produces detailed scores showing how the model performs at each stage of the research workflow. It is aimed at scientists, engineers, and AI developers who build or use such models and need rigorous evaluation.
Use this if you need to rigorously evaluate the scientific intelligence of large language or vision-language models across the entire research workflow, rather than relying on general-purpose benchmarks.
Not ideal if you are looking for a simple, quick way to test a model's basic conversational or broad-domain reasoning abilities.
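The description above boils down to a simple loop: run a model over a set of scientific tasks, score each answer, and aggregate the scores per task type. The sketch below is a generic illustration of that loop using hypothetical names (ScienceTask, evaluate, and an exact-match scorer); it is not SciEvalKit's actual API, whose interfaces may look quite different.

from dataclasses import dataclass
from typing import Callable

# Hypothetical types and helpers for illustration only; SciEvalKit's
# real interfaces are not documented here.
@dataclass
class ScienceTask:
    name: str          # e.g. "symbolic_reasoning", "figure_interpretation"
    prompt: str        # task input given to the model
    reference: str     # ground-truth answer used for scoring

def evaluate(model: Callable[[str], str], tasks: list[ScienceTask]) -> dict[str, float]:
    """Run the model on each task and return a mean score per task name."""
    totals: dict[str, list[float]] = {}
    for task in tasks:
        answer = model(task.prompt)
        # Placeholder scorer: exact match; real benchmarks use task-specific metrics.
        score = 1.0 if answer.strip() == task.reference.strip() else 0.0
        totals.setdefault(task.name, []).append(score)
    return {name: sum(scores) / len(scores) for name, scores in totals.items()}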
Stars
74
Forks
10
Language
Python
License
Apache-2.0
Category
Last pushed
Feb 27, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/InternScience/SciEvalKit"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
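If you prefer fetching this record from Python rather than curl, a minimal sketch is below. It assumes only that the endpoint returns JSON (as the curl example suggests) and prints the raw payload instead of guessing at field names.

import json
import urllib.request

# Endpoint taken from the curl example above; no API key is needed
# for the free tier (100 requests/day).
URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/InternScience/SciEvalKit"

def fetch_repo_quality(url: str = URL) -> dict:
    """Fetch the quality record for the repository and return it as a dict."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_repo_quality()
    # Inspect the actual payload to see which fields the API returns.
    print(json.dumps(data, indent=2))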
Higher-rated alternatives
EvolvingLMMs-Lab/lmms-eval
One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks
vibrantlabsai/ragas
Supercharge Your LLM Application Evaluations 🚀
open-compass/VLMEvalKit
Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks
EuroEval/EuroEval
The robust European language model benchmark.
Giskard-AI/giskard-oss
🐢 Open-Source Evaluation & Testing library for LLM Agents