StarRing2022/ChatGPTX-Uni
Implements a cross-model scheme combining multi-LoRA weight integration and switching with zero-finetune enhancement: LLM-Base + LLM-X + Alpaca. Initially, LLM-Base is a ChatGLM-6B base model and LLM-X is a LLaMA enhancement model. The approach is simple and efficient; the goal is to let such language models be deployed widely at low energy cost and, ultimately, to produce "emergent intelligence" on top of a small-model base, striving to match the human-friendly quality of ChatGPT, GPT-4, and ChatRWKV at minimal compute cost. It currently handles summarization, question asking, Q&A, abstracting, rewriting, commentary, role-play, and other tasks.
This project helps integrate and switch between different large language models (LLMs) such as ChatGLM and LLaMA, enhancing their capabilities without extensive retraining. It takes existing LLMs and specialized LoRA modules as input, allowing them to collaborate and produce more versatile, human-like text outputs for tasks such as summarizing, Q&A, rewriting, and role-playing. It is aimed at AI practitioners and researchers who want to combine the strengths of various open-source LLMs efficiently.
116 stars. No commits in the last 6 months.
Use this if you want to leverage multiple large language models together, combining their specific strengths for diverse text generation and understanding tasks with minimal computational overhead.
Not ideal if you need to train a large language model from scratch, or if you are not working with existing LLMs and LoRA fine-tuning modules.
Stars: 116
Forks: 10
Language: Python
License: GPL-3.0
Category:
Last pushed: Jul 19, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/StarRing2022/ChatGPTX-Uni"
Open to everyone: 100 requests/day with no key. Get a free key for 1,000 requests/day.
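The curl call above can also be made from Python. A minimal sketch using only the standard library; the endpoint path is taken from the example above, but the JSON response schema is not documented on this page, so the fetch helper simply returns the parsed payload as-is:

```python
import json
import urllib.request

# Base path of the quality API, as shown in the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and parse the quality record for a repo.

    The response fields are not documented here, so callers should
    inspect the returned dict rather than assume a schema.
    """
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

# Example (performs a network request):
#   data = fetch_quality("StarRing2022", "ChatGPTX-Uni")
#   print(json.dumps(data, indent=2))
```

Without a key this stays within the 100-requests/day anonymous limit; a key would presumably be passed per the provider's instructions, which are not shown on this page.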
Higher-rated alternatives
shibing624/MedicalGPT
MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training Pipeline....
lyogavin/airllm
AirLLM 70B inference with single 4GB GPU
GradientHQ/parallax
Parallax is a distributed model serving framework that lets you build your own AI cluster anywhere
CrazyBoyM/llama3-Chinese-chat
Chinese post-training versions of Llama3 and Llama3.1 - fine-tuned and modified weights of interest, plus tutorial videos and docs for training, inference, evaluation, and deployment.
CLUEbenchmark/CLUE
Chinese Language Understanding Evaluation Benchmark (CLUE): datasets, baselines, pre-trained...