xf-zhao/LoT
Official implementation of LoT paper: "Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic"
This project helps AI developers and researchers improve how large language models (LLMs) reason, especially for complex, multi-step problems without prior examples. It takes an LLM's initial thought process and applies logical principles to verify and refine each step, reducing errors and 'hallucinations'. The output is a more accurate and logically sound reasoning chain from the LLM.
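The verify-and-refine loop described above can be sketched in a few lines. This is a minimal illustration of the idea, not the repository's actual implementation: `ask_llm` is a hypothetical stand-in for a real LLM call, stubbed here so the example is self-contained, and the verify/revise prompts are invented for the sketch.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical LLM interface; a real implementation would call a model API.
    Stubbed: flags one deliberately wrong arithmetic step and corrects it."""
    if prompt.startswith("Verify:"):
        return "invalid" if "2 + 2 = 5" in prompt else "valid"
    if prompt.startswith("Revise:"):
        return "2 + 2 = 4"
    return ""

def refine_chain(steps: list[str]) -> list[str]:
    """Check each reasoning step against the steps accepted so far;
    ask for a revision of any step the verifier rejects."""
    refined: list[str] = []
    for step in steps:
        verdict = ask_llm(f"Verify: given {refined}, is this step sound? {step}")
        if verdict == "invalid":
            step = ask_llm(f"Revise: {step}")
        refined.append(step)
    return refined

chain = ["We need 2 + 2.", "2 + 2 = 5"]
print(refine_chain(chain))  # the faulty step is replaced with "2 + 2 = 4"
```

With a real model behind `ask_llm`, the same loop turns an initial zero-shot chain of thought into a checked one: each step is verified in the context of the steps already accepted, and only rejected steps are regenerated.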
No commits in the last 6 months.
Use this if you are a developer or researcher working with large language models and need to enhance their ability to perform complex, zero-shot reasoning tasks more reliably and accurately.
Not ideal if you are looking for a pre-packaged application for end-users, as this is a research framework for improving LLM reasoning, not a direct user-facing tool.
Stars: 30
Forks: 4
Language: Python
License: Apache-2.0
Last pushed: Mar 13, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/xf-zhao/LoT"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Higher-rated alternatives
InternLM/SIM-CoT
[ICLR 2026] An official implementation of "SIM-CoT: Supervised Implicit Chain-of-Thought"
zhenyi4/codi
Official repository for "CODI: Compressing Chain-of-Thought into Continuous Space via Self-Distillation"
nicolay-r/Reasoning-for-Sentiment-Analysis-Framework
The official code for CoT / ZSL reasoning framework 🧠, utilized in paper: "Large Language Models...
FranxYao/FlanT5-CoT-Specialization
Implementation of ICML 23 Paper: Specializing Smaller Language Models towards Multi-Step Reasoning.
KomeijiForce/CoTAM
Official Implementation of the ACL2024 Findings paper "Controllable Data Augmentation for...