VectorInstitute/vectorlm

LLM finetuning in resource-constrained environments.

Score: 41 / 100 (Emerging)

This package helps AI engineers and researchers efficiently fine-tune mid-sized large language models (up to 13 billion parameters) on computing clusters with limited compute or slow interconnects. You provide a text dataset, and it produces a fine-tuned LLM ready for use. It's designed for those working in academic or otherwise resource-constrained environments who need to maximize training throughput.

No commits in the last 6 months.

Use this if you are an AI engineer or researcher who needs to fine-tune a large language model (up to roughly 13 billion parameters) on a GPU cluster where compute or interconnect speed is the bottleneck.

Not ideal if you are training extremely large models that require 3D parallelism (combined data, tensor, and pipeline parallelism), or if you are not working with HuggingFace models and PyTorch.
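
To make the workflow concrete, here is a minimal sketch of the kind of training step this package targets: fine-tuning a HuggingFace causal LM with PyTorch's FSDP so that parameters, gradients, and optimizer state are sharded across GPUs. This is not vectorlm's own API; the model name, sample text, and hyperparameters below are placeholders.

# Launch with: torchrun --nproc_per_node=<num_gpus> sketch.py
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from transformers import AutoModelForCausalLM, AutoTokenizer

def main():
    dist.init_process_group("nccl")
    local_rank = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)

    name = "meta-llama/Llama-2-13b-hf"  # placeholder: any HF causal LM up to ~13B
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    # Sharding parameters, gradients, and optimizer state across ranks is
    # what lets a ~13B-parameter model train on a modest cluster.
    model = FSDP(model, device_id=torch.cuda.current_device())
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    # One illustrative step; a real run iterates over your text dataset.
    batch = tokenizer("Your training text here.", return_tensors="pt").to(local_rank)
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()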

Tags: LLM fine-tuning, AI research, distributed training, GPU optimization, academic computing
Status: Stale (6 months), No Package, No Dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 17 / 25


Stars: 55
Forks: 11
Language: Python
License: MIT
Category: llm-fine-tuning
Last pushed: Jun 24, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/VectorInstitute/vectorlm"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
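
For programmatic access, here is a minimal Python sketch using only the standard library. The endpoint is copied verbatim from the curl command above; the response field names are assumptions, since the schema is not documented on this card.

import json
import urllib.request

# Same endpoint as the curl example; no key needed for up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/VectorInstitute/vectorlm"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

# Field names below are guesses inferred from the card, not a documented schema.
print(data.get("score"), data.get("category"))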