scicode-bench/SciCode

A benchmark that challenges language models to code solutions for scientific problems

Score: 55 / 100 (Established)

This project evaluates how well different AI models can write code to solve complex scientific research problems. Given a scientific problem description and the AI-generated code, it assesses whether the code correctly solves the problem. Scientists, researchers, and AI developers can use it to compare and improve AI models for scientific applications.


Use this if you are a scientist or researcher interested in how well AI can assist with coding scientific tasks, or an AI developer looking to benchmark and improve your model's ability to generate scientific code.

Not ideal if you are looking for a tool to solve your scientific coding problems directly, as this project focuses on evaluating AI models rather than providing solutions.

scientific-computing research-automation computational-science AI-evaluation model-benchmarking
No package · No dependents
Maintenance 10 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 19 / 25


Stars: 179
Forks: 31
Language: Python
License: Apache-2.0
Last pushed: Mar 09, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/scicode-bench/SciCode"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
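For scripted access, the curl command above can be reproduced in Python. This is a minimal sketch using only the standard library; it assumes the endpoint returns JSON, and the response field names are not documented here, so the example simply prints whatever comes back.

```python
import json
import urllib.request

# Endpoint from the docs above; no API key needed for up to 100 requests/day.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/scicode-bench/SciCode"

def fetch_quality_data(url: str = API_URL) -> dict:
    """Fetch the quality record for a tool (assumes a JSON response body)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_quality_data()
    print(json.dumps(data, indent=2))
```

With a free key, the same request could presumably carry it as a header or query parameter; check the provider's docs for the exact parameter name before relying on it.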