ChatGLM-Tuning and ChatGLM-Finetuning
These are competing projects offering overlapping LoRA-based fine-tuning solutions for ChatGLM models, with ChatGLM-Finetuning providing broader method coverage (Freeze, LoRA, P-tuning, full-parameter tuning) compared to ChatGLM-Tuning's LoRA-focused approach.
About ChatGLM-Tuning
mymusise/ChatGLM-Tuning
A fine-tuning solution based on ChatGLM-6B + LoRA
This project offers an affordable way to customize a large language model like ChatGLM-6B, making it perform specific tasks better. You provide your own text data, and it trains the model to generate more relevant and accurate responses for your particular use case. This is for machine learning practitioners or researchers who need to adapt a pre-trained model without extensive computational resources.
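To make the low-rank idea concrete, here is a minimal NumPy sketch of the LoRA math itself (illustrative only, not code from this repository): the frozen base weight `W` is augmented with a trainable low-rank update `B @ A` scaled by `alpha / r`, so only a small fraction of the parameters needs gradients. All names and dimensions below are hypothetical.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha, r):
    """Forward pass of a LoRA-adapted linear layer.

    W is the frozen base weight (d_out x d_in); A (r x d_in) and
    B (d_out x r) are the small trainable low-rank factors.
    """
    scale = alpha / r
    return x @ W.T + scale * (x @ A.T @ B.T)

d_in, d_out, r, alpha = 64, 64, 8, 16
rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))          # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01       # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init: adapter starts as a no-op
x = rng.normal(size=(4, d_in))

y = lora_forward(x, W, A, B, alpha, r)
trainable_fraction = (A.size + B.size) / W.size
print(trainable_fraction)  # → 0.25 for these toy dimensions
```

Because `B` starts at zero, the adapted layer initially reproduces the frozen model exactly; training then moves only `A` and `B`, which is why LoRA fine-tuning fits on modest hardware.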
About ChatGLM-Finetuning
liucongg/ChatGLM-Finetuning
Fine-tuning ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B models for specific downstream tasks, covering Freeze, LoRA, P-tuning, full-parameter fine-tuning, and more
This project helps developers adapt large language models (LLMs) like ChatGLM to specific tasks such as information extraction, text generation, or classification. It takes a pre-trained ChatGLM model and your task-specific data, then outputs a specialized model that performs much better on your particular workflow. This is for machine learning engineers and researchers who need to customize powerful LLMs without starting from scratch.
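Of the methods listed above, the Freeze approach is the simplest: train only the last few transformer layers and keep everything else fixed. A minimal sketch of the name-filtering idea behind it, using hypothetical ChatGLM-style parameter names (the actual repository's selection logic may differ):

```python
import re

def freeze_mask(param_names, trainable_layers):
    """Map each parameter name to True (trainable) or False (frozen).

    Only parameters belonging to the listed transformer layer indices
    stay trainable; embeddings and all other layers are frozen.
    """
    mask = {}
    for name in param_names:
        m = re.search(r"layers\.(\d+)\.", name)
        mask[name] = bool(m) and int(m.group(1)) in trainable_layers
    return mask

# Hypothetical parameter names in the style of a ChatGLM checkpoint
names = [
    "embedding.word_embeddings.weight",
    "layers.0.attention.query_key_value.weight",
    "layers.26.mlp.dense_h_to_4h.weight",
    "layers.27.attention.query_key_value.weight",
]
mask = freeze_mask(names, trainable_layers={26, 27})
print(sorted(n for n, t in mask.items() if t))
```

In a real training loop the mask would be applied by setting `requires_grad = False` on every frozen parameter, so the optimizer updates only the selected top layers.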