ChatGLM-Tuning and ChatGLM-Finetuning

These are competing projects offering overlapping LoRA-based fine-tuning solutions for ChatGLM models. ChatGLM-Finetuning provides broader method coverage (Freeze, P-tuning, and full-parameter tuning in addition to LoRA), while ChatGLM-Tuning focuses on LoRA alone.

|                | ChatGLM-Tuning                        | ChatGLM-Finetuning                                |
|----------------|---------------------------------------|---------------------------------------------------|
| Overall score  | 47 (Emerging)                         | 39 (Emerging)                                     |
| Maintenance    | 0/25                                  | 0/25                                              |
| Adoption       | 10/25                                 | 10/25                                             |
| Maturity       | 16/25                                 | 8/25                                              |
| Community      | 21/25                                 | 21/25                                             |
| Stars          | 3,758                                 | 2,782                                             |
| Forks          | 440                                   | 312                                               |
| Downloads      |                                       |                                                   |
| Commits (30d)  | 0                                     | 0                                                 |
| Language       | Python                                | Python                                            |
| License        | MIT                                   | None                                              |
| Flags          | Stale 6m, No Package, No Dependents   | No License, Stale 6m, No Package, No Dependents   |

About ChatGLM-Tuning

mymusise/ChatGLM-Tuning

A LoRA-based fine-tuning solution for ChatGLM-6B

This project offers an affordable way to customize a large language model like ChatGLM-6B, making it perform specific tasks better. You provide your own text data, and it trains the model to generate more relevant and accurate responses for your particular use case. This is for machine learning practitioners or researchers who need to adapt a pre-trained model without extensive computational resources.

natural-language-processing large-language-models model-customization AI-research text-generation
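The low-rank adaptation (LoRA) approach ChatGLM-Tuning builds on can be illustrated in a few lines. This is a minimal NumPy sketch of the core idea, not code from the repository: the pretrained weight stays frozen, and training only updates a pair of small matrices whose product forms a low-rank delta.

```python
import numpy as np

def lora_update(W, A, B, alpha, r):
    """Apply a LoRA delta: W' = W + (alpha / r) * B @ A.

    W: frozen pretrained weight, shape (d_out, d_in)
    A: trainable down-projection, shape (r, d_in)
    B: trainable up-projection, shape (d_out, r)
    """
    return W + (alpha / r) * (B @ A)

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 16, 2, 4
W = rng.normal(size=(d_out, d_in))      # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                # zero init, so the delta starts at 0

W_adapted = lora_update(W, A, B, alpha, r)
# With B zero-initialised, the adapted weight equals the base weight,
# so fine-tuning starts exactly from the pretrained model.
assert np.allclose(W_adapted, W)
# Only r * (d_in + d_out) parameters are trained instead of d_in * d_out.
print(r * (d_in + d_out), "trainable vs", d_in * d_out, "full")
```

This is why LoRA is "affordable": for ChatGLM-6B, the trainable parameter count drops by orders of magnitude, which is what lets the project fit fine-tuning on a single consumer GPU.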

About ChatGLM-Finetuning

liucongg/ChatGLM-Finetuning

Fine-tuning for specific downstream tasks based on the ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B models, covering Freeze, LoRA, P-tuning, full-parameter fine-tuning, and more

This project helps developers adapt large language models (LLMs) like ChatGLM to specific tasks such as information extraction, text generation, or classification. It takes a pre-trained ChatGLM model and your task-specific data, then outputs a specialized model that performs much better on your particular workflow. This is for machine learning engineers and researchers who need to customize powerful LLMs without starting from scratch.

large-language-models natural-language-processing model-customization text-generation information-extraction
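Besides LoRA, ChatGLM-Finetuning supports a "Freeze" strategy: freeze most of the network and train only the last few transformer layers. The selection logic can be sketched as below; this is an illustrative example with hypothetical parameter names (`layers.N.*`, `lm_head`), not the repository's actual code.

```python
import re

def freeze_select(param_names, trainable_layers):
    """Partition parameter names into trainable vs frozen for the
    'Freeze' strategy: only the last `trainable_layers` transformer
    layers (plus the output head) remain trainable."""
    layer_ids = sorted({int(m.group(1)) for n in param_names
                        if (m := re.match(r"layers\.(\d+)\.", n))})
    keep = set(layer_ids[-trainable_layers:])
    trainable, frozen = [], []
    for n in param_names:
        m = re.match(r"layers\.(\d+)\.", n)
        if (m and int(m.group(1)) in keep) or n.startswith("lm_head"):
            trainable.append(n)
        else:
            frozen.append(n)
    return trainable, frozen

# Toy 4-layer model: freeze everything except the last 2 layers + head.
names = [f"layers.{i}.attn.weight" for i in range(4)]
names += ["embedding.weight", "lm_head.weight"]
trainable, frozen = freeze_select(names, trainable_layers=2)
print(trainable)  # layers 2 and 3 plus lm_head stay trainable
```

Compared with LoRA, Freeze trains more parameters per layer but touches fewer layers; the repository offers both (plus P-tuning and full-parameter tuning) so the method can be matched to the task and GPU budget.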

Scores updated daily from GitHub, PyPI, and npm data.