cmavro/PackLLM
Pack of LLMs: Model Fusion at Test-Time via Perplexity Optimization
This method combines multiple large language models (LLMs) at inference time, with no additional training, to produce a stronger output for a given text prompt. Given your input prompt and several candidate LLMs, it weighs each model's contribution according to how well that model fits the prompt (i.e., how low its perplexity on the prompt is), then fuses their predictions into a single, more accurate and robust response. This is for AI practitioners, researchers, or anyone working with LLMs who wants to improve the quality of their generated text.
No commits in the last 6 months.
Use this if you are already using multiple large language models and want to combine their strengths to achieve better performance on specific text generation tasks.
Not ideal if you are looking for a standalone large language model or a tool that trains new models from scratch.
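A minimal sketch of the perplexity-weighted fusion idea described above: each model scores the prompt, models with lower perplexity get larger softmax weights, and the models' next-token distributions are averaged with those weights. Function names, the softmax temperature, and the toy inputs are illustrative assumptions, not the repository's actual API.

```python
import math

def perplexity(token_logprobs):
    # Perplexity = exp(-mean log-probability) over the prompt tokens.
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def fusion_weights(prompt_logprobs_per_model, temperature=1.0):
    # Lower prompt perplexity -> higher weight: softmax of -log(ppl) / T.
    scores = [-math.log(perplexity(lp)) / temperature
              for lp in prompt_logprobs_per_model]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_next_token_probs(prob_dists, weights):
    # Weighted average of each model's next-token distribution.
    vocab = len(prob_dists[0])
    return [sum(w * d[i] for w, d in zip(weights, prob_dists))
            for i in range(vocab)]

# Toy example: model A assigns higher log-probs to the prompt than model B,
# so it should dominate the fused distribution.
w = fusion_weights([[-0.5, -0.5], [-2.0, -2.0]])
fused = fuse_next_token_probs([[0.9, 0.1], [0.2, 0.8]], w)
```

Because the weights depend only on log-probabilities the models already compute during a forward pass over the prompt, this style of fusion adds no training step, matching the "test-time" framing in the description.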
Stars: 14
Forks: 2
Language: Python
License: —
Category:
Last pushed: Apr 25, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/cmavro/PackLLM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
NX-AI/xlstm
Official repository of xLSTM.
sinanuozdemir/oreilly-hands-on-gpt-llm
Mastering the Art of Scalable and Efficient AI Model Deployment
DashyDashOrg/pandas-llm
Pandas-LLM
wxhcore/bumblecore
An LLM training framework built from the ground up, featuring a custom BumbleBee architecture...
MiniMax-AI/MiniMax-01
The official repo of MiniMax-Text-01 and MiniMax-VL-01, large-language-model &...