yinzhangyue/EoT

Exchange-of-Thought: Enhancing Large Language Model Capabilities through Cross-Model Communication

30 / 100 · Emerging

This project helps AI researchers and developers improve the problem-solving accuracy of large language models (LLMs) on complex reasoning tasks. It takes the initial reasoning outputs of several LLM instances on a task (such as a math problem or a commonsense question) and refines them through a simulated 'exchange of thought', in which the instances share and revise their answers. The output is a more accurate final answer, improved by cross-model communication and confidence evaluation.
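The core idea can be illustrated with a toy sketch. This is not the repository's actual API: the function name, the revision rule (below-average-confidence instances adopt the confidence-weighted majority answer), and the simulated answers are all hypothetical, standing in for real LLM calls and the project's own confidence evaluation.

```python
from collections import Counter

def exchange_of_thought(answers, confidences, rounds=2):
    """Toy sketch of cross-model answer refinement (hypothetical, not the repo's API).

    answers:      one candidate answer per simulated LLM instance
    confidences:  self-reported confidence (0..1) for each answer
    Each round, instances with below-average confidence adopt the
    confidence-weighted majority answer of their peers.
    """
    answers = list(answers)
    avg = sum(confidences) / len(confidences)
    for _ in range(rounds):
        # Confidence-weighted vote over the current answers.
        weights = Counter()
        for ans, conf in zip(answers, confidences):
            weights[ans] += conf
        majority = weights.most_common(1)[0][0]
        # Low-confidence instances revise toward the majority answer.
        answers = [majority if conf < avg else ans
                   for ans, conf in zip(answers, confidences)]
    # Final answer: confidence-weighted majority after communication.
    final = Counter()
    for ans, conf in zip(answers, confidences):
        final[ans] += conf
    return final.most_common(1)[0][0]

# Three simulated solvers: two agree, one low-confidence outlier revises.
print(exchange_of_thought(["42", "40", "42"], [0.9, 0.3, 0.8]))  # → 42
```

In the real project, each instance is a separate LLM call that sees its peers' reasoning chains, not just their final answers, before revising.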

No commits in the last 6 months.

Use this if you are an AI researcher or developer working with large language models and need to enhance their performance on challenging reasoning tasks by enabling them to 'collaborate' and refine their answers.

Not ideal if you are looking for a plug-and-play solution for general content generation, or if you lack the technical expertise to set up and run a Python-based LLM experiment.

AI-research LLM-fine-tuning reasoning-tasks natural-language-processing model-evaluation
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 8 / 25

How are scores calculated?

Stars: 21
Forks: 2
Language: Python
License: Apache-2.0
Last pushed: Mar 21, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/yinzhangyue/EoT"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.