arham-kk/gpt2-finetune

Fine-tuning a text-generation model using the GPT-2 architecture and a CSV dataset

32 / 100 (Emerging)

This project helps developers adapt a GPT-2 text generation model for specific writing tasks by training it on their own dataset. You provide a CSV file containing the text examples you want the model to learn from, and it outputs a specialized GPT-2 model capable of generating similar text. This is designed for AI/ML engineers or data scientists looking to customize an existing language model.

No commits in the last 6 months.

Use this if you are a developer looking to fine-tune a pre-trained GPT-2 model on a custom text dataset to generate new text in a specific style or domain.

Not ideal if you are a non-technical user simply looking for an out-of-the-box text generation tool without any coding.
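The repository itself is a Jupyter notebook, so the exact training code is not reproduced here. As a minimal sketch of the first step it describes (turning a CSV of text examples into a GPT-2 training corpus), the snippet below uses only the standard library; the `text` column name and the `<|endoftext|>` separator convention are assumptions, not details confirmed by this listing.

```python
import csv
from pathlib import Path

def load_training_texts(csv_path, text_column="text"):
    """Read the text examples to fine-tune on from a CSV file.

    The column name "text" is an assumption; the repo's notebook may
    expect a different header.
    """
    with open(csv_path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        return [row[text_column].strip() for row in reader if row.get(text_column)]

def write_corpus(texts, out_path, eos_token="<|endoftext|>"):
    """Join the examples with GPT-2's end-of-text token into one plain-text
    training file, the typical input format for causal-LM fine-tuning."""
    Path(out_path).write_text(eos_token.join(texts), encoding="utf-8")
```

From there, a typical workflow would tokenize the corpus with a GPT-2 tokenizer and run a causal-language-modeling training loop (for example with the Hugging Face `transformers` library); the notebook's actual choices of tokenizer, hyperparameters, and trainer are not documented on this page.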

text-generation natural-language-processing deep-learning model-training language-model-customization
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 11 / 25


Stars

13

Forks

2

Language

Jupyter Notebook

License

MIT

Last pushed

Nov 08, 2023

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/arham-kk/gpt2-finetune"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
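The same request can be made from Python. The sketch below builds the endpoint URL shown in the `curl` example; the `X-API-Key` header name used for the optional key is an assumption, since this page does not document how the key is sent.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner, repo):
    """Build the quality-score endpoint URL for an owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner, repo, api_key=None):
    """Fetch the score data as JSON. The 'X-API-Key' header name is an
    assumption; check the API's own docs before relying on it."""
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For example, `fetch_quality("arham-kk", "gpt2-finetune")` would hit the same URL as the `curl` command above.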