naivoder/MCTSr

Monte Carlo Tree Search Self-Refine (MCTSr)

Score: 22 / 100 (Experimental)

This project helps AI researchers and developers evaluate the problem-solving abilities of large language models (LLMs). By feeding mathematical word problems or complex equations into a local LLaMA instance, it systematically tests how well the model generates correct answers and refines its own reasoning. The output provides insight into the LLM's performance on these challenging problems. A conceptual sketch of the loop follows below.
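
For orientation, here is a minimal sketch of an MCTSr-style refinement loop. The callables draft, critique, refine, and score are hypothetical placeholders for LLM calls (e.g. against a local LLaMA instance), not this repository's actual API, and the flat node list with single-step backpropagation is a simplification:

import math

def ucb(node, total_visits, c=1.4):
    # Favor unvisited answers; otherwise balance average reward
    # against an exploration bonus (standard UCB1).
    if node["visits"] == 0:
        return float("inf")
    return node["q"] / node["visits"] + c * math.sqrt(
        math.log(total_visits) / node["visits"])

def mctsr(problem, draft, critique, refine, score, rollouts=8):
    # draft(problem) -> str, critique(problem, answer) -> str,
    # refine(problem, answer, feedback) -> str, score(problem, answer) -> float
    # are caller-supplied LLM calls (hypothetical placeholders).
    first = draft(problem)
    nodes = [{"answer": first, "q": score(problem, first), "visits": 1}]
    for _ in range(rollouts):
        # Selection: pick the answer with the highest UCB value.
        total = sum(n["visits"] for n in nodes)
        parent = max(nodes, key=lambda n: ucb(n, total))
        # Self-refine: critique the selected answer, then rewrite it.
        feedback = critique(problem, parent["answer"])
        answer = refine(problem, parent["answer"], feedback)
        # Self-evaluate the rewrite and credit the parent (simplified
        # backpropagation: only one level, not the full ancestor chain).
        reward = score(problem, answer)
        nodes.append({"answer": answer, "q": reward, "visits": 1})
        parent["visits"] += 1
        parent["q"] += reward
    return max(nodes, key=lambda n: n["q"] / n["visits"])["answer"]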

No commits in the last 6 months.

Use this if you are an AI researcher or developer focused on understanding and improving the mathematical reasoning capabilities of LLMs.

Not ideal if you are looking for a fully polished, production-ready tool for general LLM evaluation without deep technical engagement.

Tags: AI-research, LLM-evaluation, mathematical-reasoning, natural-language-processing, model-benchmarking
Flags: No License, Stale (6 months), No Package, No Dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 8 / 25


Stars: 22
Forks: 2
Language: Python
License: None
Last pushed: Jul 06, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/naivoder/MCTSr"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
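
To consume the same endpoint from Python, a minimal standard-library sketch is below; the JSON fields in the response are not documented here, so the example simply prints whatever the API returns:

import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/naivoder/MCTSr"

# Fetch and parse the quality report for naivoder/MCTSr.
with urllib.request.urlopen(URL, timeout=10) as resp:
    report = json.load(resp)

# Response schema is an assumption, so inspect it rather than index into it.
print(json.dumps(report, indent=2))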