MadhanMohanReddy2301/gemma-Instruct-2b-Finetuning-on-alpaca

This project demonstrates the steps required to fine-tune the Gemma model for tasks like code generation. It uses QLoRA (4-bit quantized LoRA) to reduce memory usage and the SFTTrainer from the trl library for supervised fine-tuning.
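The notebook itself is not reproduced on this page, so the following is a minimal sketch of the setup the description implies. The base model (google/gemma-2b-it), the dataset (tatsu-lab/alpaca), and all hyperparameters are illustrative assumptions, and the argument names follow older trl releases (newer versions move several of these into SFTConfig).

```python
# Sketch only: model id, dataset, and hyperparameters are assumptions,
# not values taken from the project's notebook.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from trl import SFTTrainer

model_id = "google/gemma-2b-it"  # assumed Gemma 2B instruct checkpoint

# 4-bit NF4 quantization -- the "Q" in QLoRA -- keeps the frozen base
# weights small so the model fits on a single consumer GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters on the attention projections are the only trainable weights.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# tatsu-lab/alpaca ships a pre-formatted "text" column (prompt + response).
dataset = load_dataset("tatsu-lab/alpaca", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=lora_config,
    dataset_text_field="text",
    max_seq_length=512,
    args=TrainingArguments(
        output_dir="gemma-2b-alpaca-qlora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        logging_steps=10,
    ),
)
trainer.train()
```

SFTTrainer handles tokenization and packing of the dataset internally, so the only project-specific pieces are the quantization config, the LoRA config, and the dataset field name.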

Overall score: 11 / 100 (Experimental)

No commits in the last 6 months.

Flags: Stale (6m), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 0 / 25
Maturity: 11 / 25
Community: 0 / 25

The overall score is the sum of these four categories, each scored out of 25.

Stars:
Forks:
Language: Jupyter Notebook
License: MIT
Last pushed: Jun 30, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/MadhanMohanReddy2301/gemma-Instruct-2b-Finetuning-on-alpaca"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
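For scripted use, the same endpoint can be queried from Python. The response schema is not documented on this page, so this sketch just prints the JSON payload; how an API key would be passed is also not stated, so it uses keyless access.

```python
import requests

# Same endpoint as the curl example above.
url = (
    "https://pt-edge.onrender.com/api/v1/quality/transformers/"
    "MadhanMohanReddy2301/gemma-Instruct-2b-Finetuning-on-alpaca"
)

resp = requests.get(url, timeout=30)
resp.raise_for_status()
print(resp.json())  # keyless access is limited to 100 requests/day
```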