speediedan/finetuning-scheduler
A PyTorch Lightning extension that accelerates and enhances foundation model experimentation with flexible fine-tuning schedules.
This tool helps machine learning engineers and researchers accelerate their work when adapting large, pre-trained models to new tasks. Given a pre-trained model and a dataset, it applies a user-defined multi-phase schedule that progressively unfreezes groups of parameters, optionally with per-phase learning rates and transition points. The output is a fine-tuned model that performs better on the target task, achieved more efficiently than naive full fine-tuning.
Available on PyPI.
Use this if you are a machine learning practitioner experimenting with fine-tuning large foundation models and need precise control over the training process to improve performance and efficiency.
Not ideal if you are new to deep learning or only performing basic model training without complex fine-tuning strategies.
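The phased approach above is driven by a schedule that maps each phase to the parameter groups it unfreezes. As a hedged sketch of what such a schedule can look like (the module paths and the specific option values below are illustrative, not taken from the project's docs), a YAML schedule file might read:

```yaml
0:
  # Phase 0: train only the task head; everything else stays frozen.
  params:
  - model.classifier.bias
  - model.classifier.weight
1:
  # Phase 1: additionally unfreeze the top encoder layer,
  # with its own learning rate and a bounded transition epoch.
  params:
  - model.encoder.layer.11.*
  lr: 1.0e-05
  max_transition_epoch: 9
2:
  # Phase 2: unfreeze the remaining parameters.
  params:
  - model.*
```

In the project, a schedule like this is consumed by its `FinetuningScheduler` callback, which is added to a PyTorch Lightning `Trainer` alongside the usual callbacks.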
Stars
70
Forks
7
Language
Python
License
Apache-2.0
Category
ML frameworks
Last pushed
Jan 26, 2026
Commits (30d)
0
Dependencies
2
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/speediedan/finetuning-scheduler"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
pytorch/pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
keras-team/keras
Deep Learning for humans
Lightning-AI/torchmetrics
Machine learning metrics for distributed, scalable PyTorch applications.
Lightning-AI/pytorch-lightning
Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.
lanpa/tensorboardX
tensorboard for pytorch (and chainer, mxnet, numpy, ...)