taishan1994/pytorch-distributed-NLP
PyTorch distributed training
This project helps machine learning engineers and researchers accelerate the training of natural language processing models, specifically for Chinese text classification. It takes Chinese text data and a text classification model and distributes the training workload across multiple GPUs on a single machine, producing a trained model much faster than a single GPU can. The primary user is a deep learning practitioner working with large NLP datasets.
No commits in the last 6 months.
Use this if you are training a Chinese text classification model with PyTorch, have access to multiple GPUs on a single machine, and want to significantly reduce training time.
Not ideal if you are working with non-NLP tasks, do not have multiple GPUs, or need to train models across multiple machines (distributed training beyond a single node).
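The core technique the repo demonstrates is PyTorch DistributedDataParallel (DDP) on one node with several GPUs. Below is a minimal, hedged sketch of that pattern; the tiny linear classifier and random tensors are placeholders for illustration only, not the repo's actual Chinese text classification model or data.

```python
# Minimal single-node multi-GPU sketch using PyTorch DistributedDataParallel (DDP).
# The model and dataset here are hypothetical placeholders.
# Launch with: torchrun --nproc_per_node=NUM_GPUS ddp_demo.py
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

def main():
    # torchrun sets LOCAL_RANK / RANK / WORLD_SIZE for each spawned process.
    local_rank = int(os.environ["LOCAL_RANK"])
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)

    # Placeholder data: 1000 samples, 128-dim features, 10 classes.
    dataset = TensorDataset(torch.randn(1000, 128), torch.randint(0, 10, (1000,)))
    sampler = DistributedSampler(dataset)  # shards the data across processes
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    model = nn.Linear(128, 10).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(3):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()  # gradients are all-reduced across GPUs here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Swapping the placeholder model for a BERT-style text classifier and the random tensors for a tokenized Chinese dataset gives the single-node setup this repo covers; multi-machine training would additionally require a rendezvous address, which is outside this repo's scope.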
Stars: 74
Forks: 16
Language: Python
License: —
Category:
Last pushed: Jul 31, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/taishan1994/pytorch-distributed-NLP"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
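For illustration, the same request in Python. This is a sketch under the assumption that the endpoint returns JSON; the response schema is not documented here.

```python
# Hypothetical Python equivalent of the curl command above.
# Assumes the endpoint returns a JSON body (schema not verified).
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/nlp/taishan1994/pytorch-distributed-NLP"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())
```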
Higher-rated alternatives
codertimo/BERT-pytorch
Google AI 2018 BERT pytorch implementation
JayYip/m3tl
BERT for Multitask Learning
920232796/bert_seq2seq
PyTorch implementation of BERT for seq2seq tasks using the UniLM scheme; now also supports automatic summarization, text classification, sentiment analysis, NER, and POS tagging, plus the T5 model and GPT-2 for article continuation.
sileod/tasknet
Easy modernBERT fine-tuning and multi-task learning
graykode/toeicbert
TOEIC (Test of English for International Communication) solving using the pytorch-pretrained-BERT model.