A-baoYang/alpaca-7b-chinese

Finetune LLaMA-7B with Chinese instruction datasets

Quality score: 42 / 100 (Emerging)

This project helps AI developers fine-tune large language models (LLMs) like LLaMA-7B or BLOOM for specific applications using Chinese instruction datasets. You input an existing LLM and a Chinese dataset, and it outputs a more specialized LLM capable of complex Chinese NLP tasks such as summarization, question answering, or text generation. This is for AI engineers or data scientists who need to adapt a general LLM to perform well on tasks requiring Chinese language understanding and generation.

137 stars. No commits in the last 6 months.

Use this if you are an AI developer who wants to adapt LLaMA-7B or BLOOM to specific Chinese NLP tasks on limited GPU resources (a minimal fine-tuning sketch follows below).

Not ideal if you are a non-developer or need an out-of-the-box solution without custom model training.
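Under the hood the workflow is parameter-efficient fine-tuning: load a base model, attach small trainable LoRA adapters, and train on Alpaca-style Chinese instruction records. The following is a minimal sketch of that setup, assuming Hugging Face transformers, peft, datasets, and bitsandbytes; the model ID, data file, prompt template, and hyperparameters are illustrative placeholders, not the repo's actual training scripts.

# Minimal LoRA fine-tuning sketch. Assumptions: transformers, peft,
# datasets, and bitsandbytes are installed; BASE_MODEL and DATA_FILE are
# placeholders, and this is not the repo's actual training code.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "huggyllama/llama-7b"       # placeholder LLaMA-7B checkpoint
DATA_FILE = "data/zh_instructions.json"  # placeholder instruction dataset

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default

# Load the 7B base model in 8-bit so it fits on a single consumer GPU.
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, load_in_8bit=True, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA trains small low-rank adapters instead of all 7B parameters.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
))

def tokenize(example):
    # Alpaca-style prompt: instruction, optional input, expected response.
    text = (
        f"### Instruction:\n{example['instruction']}\n"
        f"### Input:\n{example.get('input', '')}\n"
        f"### Response:\n{example['output']}"
    )
    return tokenizer(text, truncation=True, max_length=512)

train_set = load_dataset("json", data_files=DATA_FILE)["train"].map(tokenize)

Trainer(
    model=model,
    train_dataset=train_set,
    args=TrainingArguments(
        output_dir="out", per_device_train_batch_size=4,
        gradient_accumulation_steps=8, num_train_epochs=3,
        learning_rate=3e-4, fp16=True, logging_steps=10,
    ),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()

model.save_pretrained("out/lora-adapter")  # writes only the adapter weights

Because only the adapter weights are saved, the output is small (tens of megabytes) and easy to share or merge back into the base model at inference time.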

Tags: natural-language-processing, machine-learning-engineering, chinese-language-ai, large-language-model-customization, ai-model-training
Status: Stale (no commits in 6 months), no package published, no dependents.
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 16 / 25
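The four sub-scores appear to sum to the overall rating: 0 + 10 + 16 + 16 = 42 out of a possible 100 (4 × 25).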

Stars: 137
Forks: 18
Language: Python
License: CC0-1.0
Category: llm-fine-tuning
Last pushed: May 08, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/A-baoYang/alpaca-7b-chinese"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
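For scripted access, the same endpoint can be called from Python. Below is a minimal sketch using requests, with the URL taken verbatim from the curl example above; the response schema is not documented here, so the code simply prints the raw JSON.

import requests

# Endpoint copied from the curl example; no key needed up to 100 requests/day.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/A-baoYang/alpaca-7b-chinese")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # fail loudly on HTTP errors
print(resp.json())       # inspect the returned quality data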