tlkh/t2t-tuner

Convenient Text-to-Text Training for Transformers

Quality score: 43 / 100 (Emerging)

This tool helps machine learning engineers and researchers fine-tune large text-to-text and text generation models for specific applications. It takes your prepared text datasets (such as question-answer pairs or document-summary pairs) and trains a transformer model (like T5 or GPT) to perform your desired text task. The output is a specialized model ready for deployment; the tool also simplifies fitting large models onto standard GPU setups.

No commits in the last 6 months. Available on PyPI.

Use this if you need to fine-tune large language models for text-to-text or text generation tasks and want a streamlined process that handles memory and performance optimizations.

Not ideal if you are looking for a no-code solution or if your task is not related to text-to-text or text generation.

natural-language-processing large-language-models machine-learning-engineering text-generation model-training
Maintenance: 0 / 25 (stale for 6 months)
Adoption: 6 / 25
Maturity: 25 / 25
Community: 12 / 25
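The overall score appears to be a plain sum of the four 25-point subscores, which can be sanity-checked in a few lines (a sketch assuming that simple-sum scoring; the card does not document the exact formula):

```python
# Subscores taken from the card above. Assumption: the overall
# score is the unweighted sum of the four 25-point dimensions.
subscores = {"Maintenance": 0, "Adoption": 6, "Maturity": 25, "Community": 12}

overall = sum(subscores.values())
print(overall)  # matches the card's 43 / 100
```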


Stars: 19
Forks: 3
Language: Jupyter Notebook
License: MIT
Last pushed: Dec 10, 2021
Commits (30d): 0
Dependencies: 8

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/tlkh/t2t-tuner"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
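The same endpoint can be queried from Python with only the standard library. This is a minimal sketch: the `fetch_quality` helper name is my own, and since the response schema is not documented here, it simply returns the parsed JSON as-is:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def fetch_quality(ecosystem: str, repo: str) -> dict:
    """Fetch the quality scorecard for a repo.

    Open to everyone at 100 requests/day without a key
    (a free key raises this to 1,000/day).
    """
    url = f"{API_BASE}/{ecosystem}/{repo}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Builds the same URL as the curl example above:
print(f"{API_BASE}/transformers/tlkh/t2t-tuner")
```

For example, `fetch_quality("transformers", "tlkh/t2t-tuner")` would retrieve this card's data.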