InternScience/SGI-Bench
Probing Scientific General Intelligence of LLMs with Scientist-Aligned Workflows
This project provides a standardized way to test how well large language models (LLMs) perform tasks across the entire scientific inquiry process, from generating new ideas to interpreting experimental results. It takes a specific LLM and a set of scientist-aligned problems, then evaluates the model's responses using an agent-based framework and multiple metrics. Scientists, researchers, and AI developers can use it to gauge an LLM's 'Scientific General Intelligence' (SGI).
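As a rough mental model only (not SGI-Bench's actual API; see the repository for that), a benchmark of this kind loops over tasks, queries the model under test, and scores each response with a judge. Every name in the Python sketch below is hypothetical:

from dataclasses import dataclass

@dataclass
class Task:
    prompt: str     # e.g. "Propose a falsifiable hypothesis for ..."
    reference: str  # gold answer or rubric the judge scores against

def evaluate(model_call, judge, tasks):
    # Query the model on every task and average the judge's scores.
    scores = [judge(model_call(t.prompt), t.reference) for t in tasks]
    return sum(scores) / len(scores)

An agent-based evaluator, as the description mentions, would replace the single model_call with a multi-step, tool-using loop, but the overall task-query-score shape stays the same.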
Use this if you need to rigorously evaluate how effectively an AI model can act like a scientist across various tasks and disciplines.
Not ideal if you are looking for an everyday tool to assist with a specific scientific task rather than benchmarking an AI's general scientific capability.
Stars: 156
Forks: 4
Language: Python
License: MIT
Category:
Last pushed: Jan 19, 2026
Commits (last 30 days): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/InternScience/SGI-Bench"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
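If you would rather consume the endpoint from Python than curl, a minimal sketch using the requests library is below. The URL is taken from the example above; the shape of the returned JSON is an assumption, so inspect the response before relying on specific field names.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/InternScience/SGI-Bench"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surface HTTP errors, e.g. rate limiting
data = resp.json()
print(data)              # expect repo metadata such as stars and forks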
Higher-rated alternatives
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems