codelion/pts
Pivotal Token Search
This tool helps AI model developers and researchers understand how large language models (LLMs) arrive at their answers. It analyzes an LLM's generated responses to problems, such as mathematical or reasoning tasks, and identifies the specific tokens or phrases that critically influence whether the model succeeds or fails. You can then use this insight to fine-tune your models or steer their behavior during generation.
Use this if you need to pinpoint exactly which parts of an LLM's thought process lead to correct or incorrect outcomes, allowing you to improve model performance and interpretability.
Not ideal if you are looking for a general-purpose model evaluation tool or if your focus is not on detailed, token-level or sentence-level reasoning analysis for LLMs.
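The core idea can be sketched in a few lines. This is a conceptual illustration, not the repository's actual API: for each token in a response, estimate the probability of reaching a correct final answer when generation resumes from the prefix ending at that token, and flag tokens whose addition shifts that probability by more than a threshold. The `rollout` function and the threshold value here are hypothetical stand-ins for sampled model completions.

```python
def estimate_success(prefix, rollout, n_samples=100):
    """Monte Carlo estimate of P(success | prefix), where `rollout`
    completes the prefix and returns True on a correct final answer."""
    return sum(rollout(prefix) for _ in range(n_samples)) / n_samples

def find_pivotal_tokens(tokens, rollout, threshold=0.2, n_samples=100):
    """Return (index, token, delta) triples for tokens whose addition
    moves the estimated success probability by at least `threshold`."""
    pivotal = []
    prob = estimate_success([], rollout, n_samples)
    for i, tok in enumerate(tokens):
        new_prob = estimate_success(tokens[: i + 1], rollout, n_samples)
        delta = new_prob - prob
        if abs(delta) >= threshold:
            pivotal.append((i, tok, delta))
        prob = new_prob
    return pivotal

# Toy demonstration: success hinges entirely on whether "therefore"
# appears in the prefix (a deterministic stand-in for model sampling).
def toy_rollout(prefix):
    return "therefore" in prefix

tokens = ["the", "sum", "is", "therefore", "42"]
print(find_pivotal_tokens(tokens, toy_rollout, threshold=0.5, n_samples=10))
# → [(3, 'therefore', 1.0)]
```

In practice the rollout would sample real model completions, so each probability estimate is noisy and the sample count trades cost against precision.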
Stars: 146
Forks: 9
Language: Python
License: Apache-2.0
Category:
Last pushed: Dec 20, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/codelion/pts"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
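The same endpoint can be called from Python. A minimal sketch, assuming the URL pattern `/{owner}/{repo}` generalizes from the one confirmed path above (only `codelion/pts` is shown on this page):

```python
import json
import urllib.request

# Base path taken from the curl command above; the owner/repo suffix
# pattern is an assumption.
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner, repo):
    """Build the quality-data endpoint URL for a given repository."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("codelion", "pts")
print(url)
# Uncomment to fetch (about 100 requests/day without a key):
# data = json.load(urllib.request.urlopen(url))
```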
Related tools
DtYXs/Pre-DPO
Pre-DPO: Improving Data Utilization in Direct Preference Optimization Using a Guiding Reference Model
dannylee1020/openpo
Building synthetic data for preference tuning
RLHFlow/Directional-Preference-Alignment
Directional Preference Alignment
pspdada/Uni-DPO
[ICLR 2026] Official repository of "Uni-DPO: A Unified Paradigm for Dynamic Preference...
liushunyu/awesome-direct-preference-optimization
A Survey of Direct Preference Optimization (DPO)