wangcongcong123/ttt

A package for fine-tuning Transformers with TPUs, written in TensorFlow 2.0+

Score: 35 / 100 (Emerging)

This project helps machine learning engineers fine-tune Transformer models for a range of natural language processing tasks. It takes raw text or pre-tokenized inputs and fine-tunes models such as BERT or T5 to produce specialized models for tasks like text classification, translation, or summarization. It is designed for users who work with large datasets and need hardware accelerators such as TPUs or GPUs to speed up training.
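
To make the workflow concrete, here is a minimal sketch of TPU-backed fine-tuning in TensorFlow 2 using the Hugging Face transformers library. This is not this package's own API (its interface is not shown on this page); the model name, dataset, and hyperparameters are illustrative only.

import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Connect to a TPU if one is available; otherwise fall back to the default
# (CPU/GPU) distribution strategy.
try:
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)
except ValueError:
    strategy = tf.distribute.get_strategy()

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Toy labeled dataset, tokenized to fixed-length tensors.
texts = ["great movie", "terrible plot"]
labels = [1, 0]
enc = tokenizer(texts, padding="max_length", truncation=True,
                max_length=32, return_tensors="tf")
ds = tf.data.Dataset.from_tensor_slices((dict(enc), labels)).batch(2)

# Build and compile inside the strategy scope so variables are placed
# on the TPU replicas.
with strategy.scope():
    model = TFAutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"])

model.fit(ds, epochs=1)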

No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher needing to fine-tune Transformer-based models on large datasets using Google's TPUs or powerful GPUs.

Not ideal if you are looking for a simple, low-code solution for NLP tasks or if your work does not involve training custom deep learning models.

natural-language-processing deep-learning-engineering text-classification machine-translation text-summarization
Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 12 / 25


Stars: 37
Forks: 5
Language: Python
License: MIT
Last pushed: Mar 10, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/wangcongcong123/ttt"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
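
For scripted access, the same endpoint can be queried from Python. The URL is taken from the curl example above; the response schema is not documented on this page, so the sketch simply prints the returned JSON rather than assuming field names.

import requests

# Quality endpoint for this repo, as shown in the curl example above.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/wangcongcong123/ttt"

resp = requests.get(url, timeout=10)
resp.raise_for_status()

# The response body is JSON; inspect it first, then drill into whatever
# fields the API actually exposes.
print(resp.json())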