scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems
This project evaluates how well AI models can write code to solve complex scientific research problems. It takes a scientific problem description and the AI-generated code as input, then assesses whether the code correctly solves the problem. Scientists, researchers, and AI developers can use it to compare and improve AI models for scientific applications.
Use this if you are a scientist or researcher interested in how well AI can assist with coding scientific tasks, or an AI developer looking to benchmark and improve your model's ability to generate scientific code.
Not ideal if you are looking for a tool to solve your scientific coding problems directly, as this project focuses on evaluating AI models rather than providing solutions.
Stars
179
Forks
31
Language
Python
License
Apache-2.0
Category
Last pushed
Mar 09, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/scicode-bench/SciCode"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
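The curl request above can also be made from Python. A minimal sketch using only the standard library, assuming the endpoint returns JSON; the response schema is not documented here, so only the URL construction is shown as certain:

```python
import json
import urllib.parse
import urllib.request

# Base endpoint as shown in the curl example above
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{urllib.parse.quote(owner)}/{urllib.parse.quote(repo)}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload.

    Without an API key this is rate-limited to 100 requests/day;
    a free key raises the limit to 1,000/day.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# Example: the URL for this repository (no network call made here)
print(quality_url("scicode-bench", "SciCode"))
```

Keeping URL construction separate from the fetch makes the path easy to verify offline before spending a request against the daily quota.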
Related tools
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
swefficiency/swefficiency
Benchmark harness and code for "SWE-fficiency: Can Language Models Optimize Real World...