xyjigsaw/LLM-Pretrain-SFT
Scripts of LLM pre-training and fine-tuning (w/wo LoRA, DeepSpeed)
This project helps machine learning engineers, data scientists, and AI researchers adapt existing large language models (LLMs) to new domains or specific tasks. It takes a pre-trained LLM and either a collection of plain text documents (for continued pre-training) or structured instruction-output pairs (for fine-tuning), and produces a specialized LLM, ready for deployment, that performs better on your specific data or task.
No commits in the last 6 months.
Use this if you need to customize the knowledge or behavior of models like LLaMA, Baichuan, or Mistral for a particular application, such as a customer service chatbot, a specialized content generator, or a nuanced text classifier.
Not ideal if you are looking for a no-code solution or if you are not comfortable with command-line operations and managing deep learning environments.
Stars: 87
Forks: 16
Language: Python
License: Apache-2.0
Category:
Last pushed: Jan 30, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/xyjigsaw/LLM-Pretrain-SFT"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
agentscope-ai/Trinity-RFT
Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement...
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO &...
zjunlp/EasyEdit
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
huggingface/alignment-handbook
Robust recipes to align language models with human and AI preferences
hyunwoongko/nanoRLHF
nanoRLHF: from-scratch journey into how LLMs and RLHF really work.