xyjigsaw/LLM-Pretrain-SFT

Scripts of LLM pre-training and fine-tuning (w/wo LoRA, DeepSpeed)

Score: 42 / 100 (Emerging)

This project helps machine learning engineers adapt existing large language models (LLMs) to new domains or specific tasks. It takes a pre-trained LLM and either a collection of plain text documents (for continued pre-training) or structured instruction-output pairs (for supervised fine-tuning). The output is a specialized LLM, ready for deployment, that performs better on your specific data or task. It is aimed at machine learning engineers, data scientists, and AI researchers who work with large language models.
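For the fine-tuning path, instruction-output pairs are typically stored as JSON records and flattened into a single prompt string before tokenization. A minimal sketch of that preprocessing step, assuming an Alpaca-style template and the field names `instruction` and `output` (the repo's actual schema and template may differ):

```python
import json

# Hypothetical Alpaca-style prompt template; the repo's exact format may differ.
TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n{output}"
)

def build_sft_examples(records):
    """Flatten instruction-output pairs into prompt strings for SFT."""
    return [TEMPLATE.format(**r) for r in records]

# Example: one record in the assumed JSON format.
data = json.loads('[{"instruction": "Say hi", "output": "Hi!"}]')
examples = build_sft_examples(data)
print(examples[0])
```

Continued pre-training skips this step and trains directly on raw text chunks; only the SFT stage needs a prompt template.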

No commits in the last 6 months.

Use this if you need to customize the knowledge or behavior of models like LLaMA, Baichuan, or Mistral for a particular application, such as a customer service chatbot, a specialized content generator, or a nuanced text classifier.

Not ideal if you are looking for a no-code solution or if you are not comfortable with command-line operations and managing deep learning environments.

large-language-models model-fine-tuning natural-language-processing deep-learning-customization
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 17 / 25

How are scores calculated?

Stars

87

Forks

16

Language

Python

License

Apache-2.0

Last pushed

Jan 30, 2024

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/xyjigsaw/LLM-Pretrain-SFT"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.