tlkh/t2t-tuner
Convenient Text-to-Text Training for Transformers
This tool helps machine learning engineers and researchers fine-tune large text-to-text and text generation models for specific applications. It takes your prepared text datasets (such as question-answer pairs or document-summary pairs) and trains a transformer model (such as T5 or GPT) to perform your desired text task. The output is a specialized model ready for deployment; the library also handles the memory and performance optimizations needed to train large models on standard GPU setups.
No commits in the last 6 months. Available on PyPI.
Use this if you need to fine-tune large language models for text-to-text or text generation tasks and want a streamlined process that handles memory and performance optimizations.
Not ideal if you are looking for a no-code solution or if your task is not related to text-to-text or text generation.
Stars
19
Forks
3
Language
Jupyter Notebook
License
MIT
Category
Last pushed
Dec 10, 2021
Commits (30d)
0
Dependencies
8
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/tlkh/t2t-tuner"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
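The endpoint follows a predictable `category/owner/repo` pattern, so lookups are easy to script. A minimal Python sketch is below; the URL pattern is taken from the curl example above, while the shape of the JSON response is not documented here, so it is printed as-is rather than parsed into specific fields.

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API endpoint URL for a given repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("transformers", "tlkh", "t2t-tuner")
print(url)
# Uncomment to fetch live data (network access and rate limits apply):
# data = json.load(urlopen(url))
# print(json.dumps(data, indent=2))
```

With a free API key the daily limit rises from 100 to 1,000 requests; how the key is passed (header or query parameter) is not shown on this page, so check the API documentation before adding it to the request.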
Higher-rated alternatives
worldbank/REaLTabFormer
A suite of auto-regressive and Seq2Seq (sequence-to-sequence) transformer models for tabular and...
MagedSaeed/generate-sequences
A python package made to generate sequences (greedy and beam-search) from Pytorch (not...
NohTow/PPL-MCTS
Repository for the code of the "PPL-MCTS: Constrained Textual Generation Through...
styfeng/TinyDialogues
Code & data for the EMNLP 2024 paper: Is Child-Directed Speech Effective Training Data for...
readme-generator/alreadyme-ai-serving
Serving large language model with transformers