gustavecortal/gpt-j-fine-tuning-example

Fine-tuning the 6-billion-parameter GPT-J (and other models) with LoRA and 8-bit compression

Quality score: 35 / 100 (Emerging)

This tool helps AI developers customize large language models like GPT-J or GPT-Neo for specific tasks or content styles, even with limited computing power. You provide a general-purpose model and your specialized text data, and it outputs a fine-tuned model that generates text aligned with your unique requirements. This is ideal for machine learning engineers, AI researchers, or data scientists working on custom text generation applications.
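The repo's core idea is LoRA: freeze the pretrained weight matrix and train only a small low-rank adapter beside it, which is what makes fine-tuning feasible on modest hardware. A minimal NumPy sketch of the technique (illustration only; the repo itself uses PyTorch with Hugging Face `transformers`, and all dimensions here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4  # r is the LoRA rank (r << d_in, d_out)

W = rng.normal(size=(d_in, d_out))     # frozen pretrained weight
A = rng.normal(size=(d_in, r)) * 0.01  # trainable low-rank factor
B = np.zeros((r, d_out))               # zero-init: adapter starts as a no-op

def lora_forward(x, scaling=1.0):
    # Frozen path plus low-rank update: x W + scaling * x A B
    return x @ W + scaling * (x @ A @ B)

x = rng.normal(size=(2, d_in))
# With B = 0 the adapted layer reproduces the frozen layer exactly.
assert np.allclose(lora_forward(x), x @ W)

# Only A and B are trained: 512 parameters vs. 4096 frozen ones here.
trainable, frozen = A.size + B.size, W.size
```

Because only `A` and `B` receive gradients, optimizer state and gradient memory shrink proportionally, which is the main saving on small GPUs.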

No commits in the last 6 months.

Use this if you need to adapt a large language model to generate text on a very specific topic, in a particular style, or using specialized terminology, without requiring extensive GPU resources.
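The "8-bit compression" half of the title refers to storing weights as int8 instead of float32. A self-contained sketch of absmax int8 quantization, the basic idea behind such schemes (simplified: real 8-bit inference libraries quantize per block or per column and handle outliers separately):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=(256,)).astype(np.float32)  # toy weight vector

def quantize_absmax(w):
    # Map the largest magnitude to 127, round everything onto the int8 grid.
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

q, scale = quantize_absmax(w)
w_hat = dequantize(q, scale)

# Rounding error is bounded by half a quantization step.
err = np.abs(w - w_hat).max()
assert err <= scale * 0.5 + 1e-6
```

Storing `q` plus one float scale takes roughly a quarter of the memory of the float32 weights, at the cost of the small reconstruction error bounded above.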

Not ideal if you're looking for a plug-and-play solution for text generation and don't have experience with machine learning model fine-tuning or development.

natural-language-processing large-language-models model-customization text-generation machine-learning-engineering
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 8 / 25
Community: 19 / 25


Stars: 68
Forks: 18
Language: Jupyter Notebook
License: None
Last pushed: Oct 05, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/gustavecortal/gpt-j-fine-tuning-example"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
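The same endpoint can be queried from Python with the standard library. A small sketch (the response schema is not documented on this page, so the helper below just returns the parsed JSON; `fetch_quality` is a hypothetical name):

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"
repo = "transformers/gustavecortal/gpt-j-fine-tuning-example"
url = f"{BASE}/{repo}"

def fetch_quality(url):
    # Hypothetical helper: GET the endpoint and parse the JSON body.
    with urlopen(url) as resp:
        return json.load(resp)

# Call fetch_quality(url) to retrieve the quality data as a dict.
```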