ssbuild/chatglm_finetuning
ChatGLM-6B finetuning and Alpaca finetuning
This project helps customize the pre-trained ChatGLM-6B large language model to better suit your specific needs. You provide your own conversational or instruction data, and the training scripts adapt the model to respond in a way that is more aligned with your domain or desired style. The output is a fine-tuned ChatGLM-6B model ready for specialized chat or question-answering tasks. It is aimed at AI practitioners and researchers who want to adapt powerful language models to niche applications.
1,537 stars. No commits in the last 6 months.
Use this if you need to tailor a ChatGLM-6B model to generate responses for a specific industry, customer-service scenario, or any particular conversation style using your own data.
Not ideal if you simply need an out-of-the-box conversational AI without any custom behavior or specialized knowledge.
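Since the project's tagline mentions Alpaca-style finetuning, the training data is typically supplied as instruction/response records. Below is a minimal sketch of converting your own Q&A pairs into that layout; the `instruction`/`input`/`output` field names follow the common Alpaca convention, and the exact schema this repo's data loader expects should be checked against its code.

```python
import json

# Hypothetical raw Q&A pairs from your own domain data.
raw_pairs = [
    {"question": "How do I reset my router?",
     "answer": "Hold the reset button for 10 seconds."},
    {"question": "What is the warranty period?",
     "answer": "Two years from the purchase date."},
]

def to_alpaca_record(pair):
    """Map a Q&A pair onto the common Alpaca instruction format.
    These field names are the Alpaca convention, not confirmed
    against this repo's loader."""
    return {
        "instruction": pair["question"],
        "input": "",
        "output": pair["answer"],
    }

def write_jsonl(pairs, path):
    # One JSON object per line, the layout most finetuning scripts expect.
    with open(path, "w", encoding="utf-8") as f:
        for pair in pairs:
            f.write(json.dumps(to_alpaca_record(pair), ensure_ascii=False) + "\n")

write_jsonl(raw_pairs, "train.jsonl")
```

Keeping `input` empty is standard for pure instruction-following pairs; it is used when a task needs extra context alongside the instruction.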
Stars
1,537
Forks
173
Language
Python
License
Apache-2.0
Category
transformers
Last pushed
Mar 09, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ssbuild/chatglm_finetuning"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000/day.
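The same call can be made from Python. This is a minimal sketch that assumes only the URL shape shown in the curl example above; the fields in the sample payload (`stars`, `forks`, `commits_30d`) are hypothetical placeholders, since the API's response schema is not documented here.

```python
import json
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository.
    The path shape is taken from the curl example above."""
    return f"{API_BASE}/{quote(ecosystem)}/{quote(owner)}/{quote(repo)}"

url = quality_url("transformers", "ssbuild", "chatglm_finetuning")

# Hypothetical response body; the real field names may differ.
sample = json.loads('{"stars": 1537, "forks": 173, "commits_30d": 0}')
```

In practice you would fetch `url` with `urllib.request.urlopen(url)` (or the `requests` library) and decode the JSON body before reading its fields.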
Higher-rated alternatives
Goekdeniz-Guelmez/mlx-lm-lora
Train Large Language Models on MLX.
uber-research/PPLM
Plug and Play Language Model implementation. Allows steering the topic and attributes of GPT-2 models.
VHellendoorn/Code-LMs
Guide to using pre-trained large language models of source code
jarobyte91/pytorch_beam_search
A lightweight implementation of Beam Search for sequence models in PyTorch.
SmallDoges/small-doge
Doge Family of Small Language Models