mymusise/ChatGLM-Tuning
A LoRA-based fine-tuning scheme for ChatGLM-6B
This project offers an affordable way to customize a large language model like ChatGLM-6B using LoRA (low-rank adaptation), making it perform specific tasks better. You provide your own text data, and the training adapts the model to generate more relevant and accurate responses for your particular use case. It is aimed at machine learning practitioners and researchers who need to adapt a pre-trained model without extensive computational resources.
3,758 stars. No commits in the last 6 months.
Use this if you need to fine-tune a ChatGLM-6B model for a specific domain or task using your own dataset, without the high costs or complexity of training a large model from scratch.
Not ideal if you're looking for a ready-to-use chatbot solution without any model training or technical setup.
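To make the LoRA idea behind this project concrete, here is a minimal numerical sketch with hypothetical shapes. The real project applies low-rank adapters to ChatGLM-6B's layers (typically via the `peft` library); this toy version only illustrates the mechanics: a frozen weight matrix plus a trainable low-rank update.

```python
import numpy as np

# Toy LoRA sketch (hypothetical dimensions, not the project's actual code).
rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 64, 64, 8, 16    # r << d: low-rank bottleneck
W = rng.normal(size=(d_in, d_out))       # frozen pretrained weight
A = rng.normal(size=(d_in, r)) * 0.01    # trainable down-projection
B = np.zeros((r, d_out))                 # trainable up-projection, zero-initialized

def lora_forward(x):
    # Output = frozen path + scaled low-rank update (alpha / r).
    return x @ W + (alpha / r) * (x @ A @ B)

x = rng.normal(size=(4, d_in))
# With B zero-initialized, the LoRA path contributes nothing at the start,
# so the adapted model initially reproduces the base model exactly.
assert np.allclose(lora_forward(x), x @ W)

# Trainable parameters vs. full fine-tuning of this one matrix:
full = W.size            # 4096
lora = A.size + B.size   # 1024
print(f"trainable fraction: {lora / full:.2%}")  # 25.00% here; far smaller at 6B scale
```

Only `A` and `B` receive gradients during training, which is why the approach stays cheap: at the scale of a 6B-parameter model the trainable fraction typically drops below 1%.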
Stars: 3,758
Forks: 440
Language: Python
License: MIT
Category:
Last pushed: Nov 25, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/mymusise/ChatGLM-Tuning"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
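The same endpoint can be called from Python with the standard library. A small sketch, assuming only the URL pattern shown in the curl command above; the response schema and the helper names (`quality_url`, `fetch_quality`) are illustrative, not documented API:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner, repo):
    """Build the quality-API URL for a given GitHub repo."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner, repo, timeout=10):
    # Unauthenticated requests are limited to 100/day; the JSON shape
    # returned by the service is assumed here, not specified on this page.
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)

# Example (performs a network request):
# data = fetch_quality("mymusise", "ChatGLM-Tuning")
```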
Higher-rated alternatives
hoochanlon/hamuleite
🏔️ A knowledge base of academic papers in the social sciences, economics, mathematics, game theory, philosophy, and systems engineering, drawn from National Taiwan University, the National University of Singapore, Waseda University, the University of Tokyo, Academia Sinica (Taiwan), and leading Chinese universities and research institutions.
JiauZhang/chatchat
Large Language Models Python API
yuanjie-ai/ChatLLM
Work with LLMs with ease; compatible with openai & langchain, supporting Baidu ERNIE Bot (文心一言), iFlytek Spark (讯飞星火), Tencent Hunyuan (腾讯混元), Zhipu ChatGLM, and more
cambrian-mllm/cambrian
Cambrian-1 is a family of multimodal LLMs with a vision-centric design.
zai-org/ChatGLM2-6B
ChatGLM2-6B: An Open Bilingual Chat LLM | 开源双语对话语言模型