XMUDeepLIT/TTCS

The code implementation of TTCS: Test-Time Curriculum Synthesis for Self-Evolving.

Quality score: 30 / 100 (Emerging)

This framework helps AI researchers and machine learning engineers significantly improve the mathematical reasoning abilities of large language models (LLMs) without needing new training data or human feedback. It takes an existing LLM and mathematical problems as input, then iteratively refines the LLM's problem-solving capabilities by dynamically generating and learning from related, simpler problem variations. The output is a more robust and accurate LLM for mathematical tasks.
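The loop described above can be sketched in a few lines. This is a minimal illustration of a test-time curriculum round, not the repository's actual implementation: the interfaces `solve`, `update`, and `make_variants`, as well as the majority-vote confidence filter, are assumptions about how such a self-evolution step could be wired up.

```python
from collections import Counter

def majority_vote(answers):
    """Self-consistency heuristic: return the most common answer
    and its share of the votes."""
    (ans, count), = Counter(answers).most_common(1)
    return ans, count / len(answers)

def ttcs_step(solve, update, problem, make_variants,
              n_samples=5, threshold=0.6):
    """One hypothetical self-evolution round: synthesize simpler
    variants of the target problem, pseudo-label each by majority
    vote over sampled solutions, and train only on the confident
    ones. `solve(variant)` returns one sampled answer;
    `update(variant, answer)` applies a training step."""
    for variant in make_variants(problem):
        answers = [solve(variant) for _ in range(n_samples)]
        ans, conf = majority_vote(answers)
        if conf >= threshold:
            update(variant, ans)
```

With a deterministic `solve`, every variant passes the confidence filter, so each one produces exactly one training update.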

Use this if you need to boost the mathematical reasoning performance of your LLMs on specific test problems, especially when fine-tuning with external data or human-labeled examples is impractical or costly.

Not ideal if you are looking for a general-purpose LLM improvement method not specifically focused on mathematical reasoning or if you lack access to GPU resources for training.

Tags: LLM mathematical reasoning · AI model improvement · self-supervised learning · test-time training · machine learning research
No license · No package · No dependents
Maintenance: 10 / 25
Adoption: 7 / 25
Maturity: 3 / 25
Community: 10 / 25


Stars: 39
Forks: 4
Language: Python
License: none
Last pushed: Mar 08, 2026
Commits (30d): 0

Get this data via the API:

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/XMUDeepLIT/TTCS"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
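The same endpoint can be called from Python using only the standard library. The URL is taken from the curl command above; the structure of the returned JSON is an assumption, so inspect the actual response before relying on specific field names.

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-report URL for a given GitHub repository."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality report as a dict. No API key is needed
    up to the free daily limit. Field names in the response are
    not documented here; check the JSON you get back."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_quality("XMUDeepLIT", "TTCS")` retrieves the report shown on this page.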