CoinCheung/gdGPT
Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline parallelism. Faster than ZeRO/ZeRO++/FSDP.
This tool helps AI engineers and researchers efficiently train or fine-tune large language models (LLMs) on their own data. It takes raw text or conversational data in JSON format and outputs a custom-trained LLM ready for specific tasks like pretraining, instruction following, or multi-round conversations. It's designed for professionals working with deep learning and large-scale model development.
No commits in the last 6 months.
Use this if you need to train large language models like Llama or Mixtral faster and with less memory than other common methods, especially when using multiple GPUs on a single node.
Not ideal if you are looking for an off-the-shelf solution for using LLMs without needing to perform custom training or fine-tuning, or if you only have access to a single GPU with limited memory.
Stars: 97
Forks: 10
Language: Python
License: Apache-2.0
Category:
Last pushed: Feb 05, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/CoinCheung/gdGPT"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
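The same endpoint can be queried programmatically. A minimal Python sketch, assuming only the URL shown in the curl command above; the helper names and the shape of the JSON response are illustrative, not part of the documented API:

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def build_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload.

    Without an API key this is rate-limited to 100 requests/day;
    the response schema is an assumption and should be inspected.
    """
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(build_url("CoinCheung", "gdGPT"))
```

With a free key (1,000 requests/day), authentication details would follow whatever scheme the service documents; none is shown here, so the sketch stays unauthenticated.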
Higher-rated alternatives
Lightning-AI/litgpt
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
liangyuwang/Tiny-DeepSpeed
Tiny-DeepSpeed, a minimalistic re-implementation of the DeepSpeed library
catherinesyeh/attention-viz
Visualizing query-key interactions in language + vision transformers (VIS 2023)
microsoft/Text2Grad
🚀 Text2Grad: Converting natural language feedback into gradient signals for precise model...
huangjia2019/llm-gpt
From classic NLP to modern LLMs: building language models step by step. Companion code for the Epubit (异步图书) book *GPT Illustrated: How Large Models Are Built* -...