debjitpaul/Causal_CoT

About

The corresponding code from our paper "Making Reasoning Matter: Measuring and Improving Faithfulness of Chain-of-Thought Reasoning". Do not hesitate to open an issue if you run into any trouble!

Quality score: 37 / 100 (Emerging)

This project helps developers evaluate and improve how faithfully their large language models explain their answers. For a given question, you provide the model's reasoning steps along with a 'preferred' and a 'dispreferred' set of explanations, and the tool trains the model to generate more reliable and faithful reasoning. The typical user is an AI/ML developer working with large language models who needs to ensure the models' explanations are trustworthy.

Use this if you are developing large language models and need a robust framework to make their generated reasoning more transparent and faithful to the actual answer.

Not ideal if you are looking for a plug-and-play solution for general language model fine-tuning without a focus on detailed reasoning fidelity.
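To make the expected input concrete, here is a hypothetical training record in Python. The field names and layout below are illustrative assumptions, not the repository's documented schema; check the repo's data files for the actual format.

# Hypothetical training record for reasoning-faithfulness tuning.
# Field names ("question", "reasoning_steps", "preferred", "dispreferred")
# are illustrative assumptions, not this repository's actual schema.
record = {
    "question": "If a train travels 60 km in 1.5 hours, what is its average speed?",
    "reasoning_steps": [
        "Average speed is distance divided by time.",
        "60 km / 1.5 h = 40 km/h.",
    ],
    # An explanation that faithfully supports the final answer.
    "preferred": "Dividing 60 km by 1.5 hours gives 40 km/h.",
    # An explanation that reaches the answer without faithful reasoning.
    "dispreferred": "Trains usually travel at about 40 km/h, so 40 km/h.",
}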

Tags: AI Development, Natural Language Processing, Model Explainability, Large Language Models, Reasoning Fidelity
No License · No Package · No Dependents
Maintenance 10 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 14 / 25

Stars: 13
Forks: 3
Language: Python
License: None
Last pushed: Jan 14, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/debjitpaul/Causal_CoT"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
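If you prefer Python over curl, here is a minimal sketch that fetches the same endpoint, assuming it returns JSON. The response schema is not documented here, so the example simply prints whatever comes back.

import json
import urllib.request

# Quality-data endpoint for this repository (no key needed for up to
# 100 requests/day, per the note above).
url = "https://pt-edge.onrender.com/api/v1/quality/nlp/debjitpaul/Causal_CoT"

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# The response schema is undocumented here, so inspect the raw payload.
print(json.dumps(data, indent=2))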