xf-zhao/LoT

Official implementation of LoT paper: "Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic"

Score: 35 / 100 (Emerging)

This project helps AI developers and researchers improve how large language models (LLMs) reason, especially for complex, multi-step problems without prior examples. It takes an LLM's initial thought process and applies logical principles to verify and refine each step, reducing errors and 'hallucinations'. The output is a more accurate and logically sound reasoning chain from the LLM.

No commits in the last 6 months.

Use this if you are a developer or researcher working with large language models and need to enhance their ability to perform complex, zero-shot reasoning tasks more reliably and accurately.

Not ideal if you are looking for a pre-packaged application for end-users, as this is a research framework for improving LLM reasoning, not a direct user-facing tool.

Tags: AI model development, LLM fine-tuning, reasoning improvement, natural language processing, cognitive AI
Badges: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 12 / 25


Stars: 30
Forks: 4
Language: Python
License: Apache-2.0
Last pushed: Mar 13, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/xf-zhao/LoT"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
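For programmatic use, the curl call above can be reproduced in Python. This is a minimal sketch: the endpoint URL is taken from the example, but the response schema and any authentication mechanism for keyed access are assumptions, not documented behavior.

```python
# Minimal sketch: fetch the quality data as JSON instead of using curl.
# The endpoint path comes from the curl example; the response fields
# are not documented here, so the result is treated as an opaque dict.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, repo: str) -> str:
    """Build the API URL for a repository, e.g. ('transformers', 'xf-zhao/LoT')."""
    return f"{BASE}/{ecosystem}/{repo}"


def fetch_quality(ecosystem: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body."""
    with urllib.request.urlopen(quality_url(ecosystem, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    data = fetch_quality("transformers", "xf-zhao/LoT")
    print(json.dumps(data, indent=2))
```

The network call is kept behind `__main__` so the URL construction can be reused or tested without hitting the rate-limited endpoint.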