WangRongsheng/Chinese-LLaMA-Alpaca-Usage
📔 Usage instructions and core-code annotations for Chinese-LLaMA-Alpaca
This project helps AI developers and researchers adapt large language models (LLMs) to better understand and generate Chinese text. It provides instructions and scripts to prepare data, merge a specialized Chinese vocabulary into the tokenizer, and either pre-train or fine-tune existing LLaMA models. The process takes LLaMA model weights and Chinese text data as input and produces a LLaMA model with substantially improved Chinese understanding and generation.
No commits in the last 6 months.
Use this if you need to customize a LLaMA-based large language model to perform exceptionally well with Chinese text for tasks like natural language understanding, text generation, or chatbot development.
Not ideal if you are looking for a pre-built, ready-to-use Chinese LLaMA model without any customization, or if you don't have access to the necessary computational resources for model training.
Stars: 51
Forks: 7
Language: Jupyter Notebook
License: —
Category: —
Last pushed: May 16, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/WangRongsheng/Chinese-LLaMA-Alpaca-Usage"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
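The curl command above can also be reproduced from Python. The sketch below only builds the endpoint URL from its parts using the standard library; the `"transformers"` category segment is taken from the example URL, and the response schema is not documented here, so actually parsing the JSON payload is left as an assumption.

```python
from urllib.parse import quote

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub repo slug.

    `category` is assumed to follow the same convention as the
    "transformers" segment in the example URL.
    """
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_url("transformers", "WangRongsheng", "Chinese-LLaMA-Alpaca-Usage")
# Fetch it with any HTTP client, e.g.:
#   import json, urllib.request
#   data = json.load(urllib.request.urlopen(url))
```

Keeping URL construction separate from the fetch makes it easy to batch-query several repos while staying under the 100 requests/day anonymous limit.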
Higher-rated alternatives
shibing624/MedicalGPT
MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training Pipeline....
lyogavin/airllm
AirLLM 70B inference with single 4GB GPU
GradientHQ/parallax
Parallax is a distributed model serving framework that lets you build your own AI cluster anywhere
CrazyBoyM/llama3-Chinese-chat
Chinese post-training repo for Llama3 and Llama3.1: fine-tuned and modified weights, plus tutorial videos and docs for training, inference, evaluation, and deployment.
CLUEbenchmark/CLUE
Chinese Language Understanding Evaluation Benchmark (CLUE): datasets, baselines, pre-trained...