VectorInstitute/vectorlm
LLM finetuning in resource-constrained environments.
This package helps AI engineers and researchers efficiently fine-tune mid-sized large language models (up to roughly 13 billion parameters) on computing clusters with limited resources or slow interconnects. You provide a text dataset, and it produces a fine-tuned LLM ready for use. It is designed for academic or otherwise resource-constrained environments where training throughput must be optimized.
No commits in the last 6 months.
Use this if you are an AI engineer or researcher needing to fine-tune a large language model (up to approximately 13 billion parameters) on a GPU cluster where resources or interconnect speeds are a bottleneck.
Not ideal if you need complex 3D distributed training strategies for extremely large models, or if you are not working within the HuggingFace/PyTorch ecosystem.
Stars: 55
Forks: 11
Language: Python
License: MIT
Category:
Last pushed: Jun 24, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/VectorInstitute/vectorlm"
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
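The same endpoint can be queried programmatically. A minimal Python sketch using only the standard library; the URL pattern is taken from the curl example above, but the response schema is not documented here, so the result is treated as opaque JSON:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record as parsed JSON; raises on HTTP errors."""
    url = quality_url(category, owner, repo)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)


# Example (performs a live request, same as the curl command above):
# fetch_quality("transformers", "VectorInstitute", "vectorlm")
```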
Higher-rated alternatives
OptimalScale/LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
adithya-s-k/AI-Engineering.academy
Mastering Applied AI, One Concept at a Time
jax-ml/jax-llm-examples
Minimal yet performant LLM examples in pure JAX
young-geng/scalax
A simple library for scaling up JAX programs
riyanshibohra/TuneKit
Upload your data → Get a fine-tuned SLM. Free.