yinzhangyue/EoT
Exchange-of-Thought: Enhancing Large Language Model Capabilities through Cross-Model Communication
This project helps AI researchers and developers improve the problem-solving accuracy of large language models (LLMs) on complex reasoning tasks. It takes each model's initial reasoning for a task (such as a math problem or a commonsense question) and refines it through a simulated 'exchange of thought' among multiple LLM instances. The output is a more accurate final answer, produced through cross-model communication and confidence evaluation.
No commits in the last 6 months.
Use this if you are an AI researcher or developer working with large language models and need to enhance their performance on challenging reasoning tasks by enabling them to 'collaborate' and refine their answers.
Not ideal if you are looking for a plug-and-play solution for general content generation or if you don't have the technical expertise to set up and run a Python-based LLM experiment.
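To illustrate the general idea, here is a minimal Python sketch of an EoT-style exchange, with stubbed answers standing in for real LLM outputs. The update rule (switch only when a clear peer majority disagrees) and the agreement-based confidence score are simplifying assumptions for illustration, not the paper's exact protocol.

```python
from collections import Counter

def exchange_of_thought(initial_answers, rounds=2):
    """Simplified EoT-style aggregation sketch.

    Each 'model' sees its peers' answers every round and switches only
    when a strict peer majority disagrees with it.  The final answer is
    the most common one, with confidence measured as the fraction of
    models that agree (an assumed, simplified confidence metric).
    """
    answers = list(initial_answers)
    for _ in range(rounds):
        updated = []
        for i, own in enumerate(answers):
            peers = answers[:i] + answers[i + 1:]
            majority, count = Counter(peers).most_common(1)[0]
            # Keep the current answer unless peers clearly outvote it.
            updated.append(majority if count > len(peers) / 2 else own)
        answers = updated
    counts = Counter(answers)
    best, _ = counts.most_common(1)[0]
    return best, counts[best] / len(answers)

# Example: one dissenting model converges to the majority answer.
print(exchange_of_thought(["4", "4", "5"]))  # → ('4', 1.0)
```

In the actual project, each round would involve real model calls and richer communication topologies; this sketch only captures the exchange-and-converge pattern.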
Stars
21
Forks
2
Language
Python
License
Apache-2.0
Category
Last pushed
Mar 21, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/yinzhangyue/EoT"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
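The same endpoint can be queried from Python using only the standard library. This is a minimal sketch assuming the response body is JSON; the `Authorization: Bearer` header for keyed access is an assumption, since the listing does not specify how the key is passed.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(repo):
    """Build the quality-API URL for a GitHub repo given as 'owner/name'."""
    return f"{API_BASE}/{repo}"

def fetch_quality(repo, api_key=None):
    """Fetch quality data for a repo.

    The api_key header name is an assumption; without a key the API
    allows 100 requests/day per the listing above.
    """
    req = urllib.request.Request(quality_url(repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")  # assumed header
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# Example (performs a network request):
# data = fetch_quality("yinzhangyue/EoT")
```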
Higher-rated alternatives
jncraton/languagemodels
Explore large language models in 512MB of RAM
microsoft/unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
haizelabs/verdict
Inference-time scaling for LLMs-as-a-judge.
albertan017/LLM4Decompile
Reverse Engineering: Decompiling Binary Code with Large Language Models
bytedance/Sa2VA
Official Repo For Pixel-LLM Codebase