princeton-pli/AdaptMI
[COLM 2025] Adaptive Skill-based In-context Math Instruction for Small Language Models
This project helps AI developers and researchers improve the mathematical reasoning of Small Language Models (SLMs). It boosts accuracy by adaptively selecting skill-based in-context math examples according to the SLM's performance on a given question.
No commits in the last 6 months.
Use this if you are building or fine-tuning Small Language Models and want to improve their performance on mathematical reasoning tasks by providing more effective in-context learning examples.
Not ideal if you are working with large language models or focusing on language tasks other than mathematical reasoning.
Stars
9
Forks
4
Language
Python
License
—
Category
—
Last pushed
Jul 10, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/princeton-pli/AdaptMI"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
ExtensityAI/symbolicai
A neurosymbolic perspective on LLMs
TIGER-AI-Lab/MMLU-Pro
The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding...
deep-symbolic-mathematics/LLM-SR
[ICLR 2025 Oral] This is the official repo for the paper "LLM-SR" on Scientific Equation...
microsoft/interwhen
A framework for verifiable reasoning with language models.
zhudotexe/fanoutqa
Companion code for FanOutQA: Multi-Hop, Multi-Document Question Answering for Large Language...