satori-reasoning/Satori
[ICML 2025] Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search
Satori is a large language model designed to tackle complex reasoning tasks, particularly in mathematics and other challenging analytical domains. It takes a problem as input and provides a reasoned solution, including self-reflection and exploration of alternative strategies. This tool is for researchers and practitioners who need advanced problem-solving capabilities from an AI.
109 stars. No commits in the last 6 months.
Use this if you need an AI model to solve difficult, multi-step reasoning problems and provide transparent thought processes, especially in quantitative or logical fields.
Not ideal if your primary need is for creative writing, simple question answering, or tasks where a direct, unreasoned answer is sufficient.
Stars: 109
Forks: 6
Language: Python
License: Apache-2.0
Category:
Last pushed: Jun 03, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/satori-reasoning/Satori"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
open-thought/reasoning-gym
[NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards
Hmbown/Hegelion
Dialectical reasoning architecture for LLMs (Thesis → Antithesis → Synthesis)
LLM360/Reasoning360
A repo for open research on building large reasoning models
TsinghuaC3I/Awesome-RL-for-LRMs
A Survey of Reinforcement Learning for Large Reasoning Models
bowang-lab/BioReason
BioReason: Incentivizing Multimodal Biological Reasoning within a DNA-LLM Model | NeurIPS '25