principia-ai/PhysGym

A benchmark suite for evaluating LLM-based interactive scientific reasoning.

Score: 43 / 100 (Emerging)

This project helps AI researchers and scientists rigorously test how well large language models (LLMs) can discover physics laws. It pairs an LLM agent with a physics problem and systematically controls what prior information the agent receives, from full contextual descriptions down to anonymized variables. The output shows how successfully the LLM deduces, or experiments its way to, the correct physics equation, revealing whether it is memorizing known results or genuinely reasoning.
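
As a rough illustration only (this is not PhysGym's actual API; the environment, agent, and function names below are hypothetical), an interactive discovery episode of this kind might look like the following sketch: the environment hides a ground-truth law, the agent chooses experiments, and it must propose an equation from the observations alone.

    import math
    import random

    # Hypothetical sketch of an interactive discovery episode.
    # Names and structure are illustrative, not PhysGym's actual API.

    class HiddenLawEnvironment:
        """Hides a ground-truth physics law; the agent can only run experiments."""

        def __init__(self):
            self.g = 9.81  # hidden constant the agent never sees directly

        def run_experiment(self, length: float) -> float:
            """Return the observed pendulum period for a chosen length (with noise)."""
            period = 2 * math.pi * math.sqrt(length / self.g)
            return period + random.gauss(0.0, 0.01)

    def discover_law(env: HiddenLawEnvironment, budget: int = 10) -> str:
        """Toy 'agent': probe the environment, then fit T = c * sqrt(L)."""
        observations = []
        for _ in range(budget):
            length = random.uniform(0.1, 2.0)    # agent chooses the experiment
            period = env.run_experiment(length)  # environment returns a measurement
            observations.append((length, period))
        # Least-squares estimate of c in T = c * sqrt(L)
        c = sum(math.sqrt(l) * t for l, t in observations) / sum(l for l, _ in observations)
        return f"T = {c:.3f} * sqrt(L)"

    if __name__ == "__main__":
        print(discover_law(HiddenLawEnvironment()))

In the real benchmark the agent is an LLM and the prior information it gets about the system (variable names, context, known constants) is the controlled factor; the sketch above only conveys the experiment-then-infer loop.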

Use this if you are an AI researcher or cognitive scientist developing or evaluating LLM agents for scientific discovery and need to understand how different levels of prior knowledge impact their reasoning abilities.

Not ideal if you are looking for an off-the-shelf tool to solve practical physics problems, or if you want to apply LLMs in an industrial setting without deep research into their discovery capabilities.

AI-research scientific-discovery LLM-evaluation physics-modeling cognitive-science
No published package · No dependents
Maintenance 6 / 25
Adoption 9 / 25
Maturity 13 / 25
Community 15 / 25


Stars: 92
Forks: 12
Language: Python
License: MIT
Last pushed: Jan 06, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/principia-ai/PhysGym"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
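
If you prefer Python over curl, a minimal standard-library sketch is below. It assumes the endpoint returns JSON; since the response schema is not documented here, it simply prints the full payload rather than guessing at field names.

    import json
    import urllib.request

    # Same endpoint as the curl example above.
    URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/principia-ai/PhysGym"

    # Fetch and parse the JSON response (no API key; the free tier allows keyless access).
    with urllib.request.urlopen(URL, timeout=10) as resp:
        data = json.load(resp)

    # Dump whatever the API returned; adjust key access once you know the schema.
    print(json.dumps(data, indent=2))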