ShinoharaHare/LLM-Training

A distributed training framework for large language models powered by Lightning.

Score: 37 / 100 (Emerging)

This framework helps machine learning practitioners efficiently train large language models for tasks such as pre-training, instruction tuning, and alignment methods like DPO and ORPO. You provide raw text data and configuration settings, and it outputs a trained language model ready for deployment or further fine-tuning. It is aimed at ML engineers and researchers working with large-scale text generation and understanding models.
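
As a rough illustration of that workflow (not this repo's actual API, which is not shown on this page), here is a minimal sketch of causal-LM fine-tuning with vanilla Lightning and Hugging Face transformers; the model name, hyperparameters, and data loading are placeholders:

import lightning as L
import torch
from transformers import AutoModelForCausalLM

class CausalLMModule(L.LightningModule):
    """Generic next-token-prediction module; stands in for whatever LLM-Training builds internally."""
    def __init__(self, model_name: str = "gpt2", lr: float = 2e-5):  # placeholder model and learning rate
        super().__init__()
        self.model = AutoModelForCausalLM.from_pretrained(model_name)
        self.lr = lr

    def training_step(self, batch, batch_idx):
        # Passing input_ids as labels yields the standard causal LM (next-token) loss.
        out = self.model(**batch, labels=batch["input_ids"])
        self.log("train_loss", out.loss)
        return out.loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.lr)

# trainer = L.Trainer(devices=2, strategy="ddp", max_steps=1000)
# trainer.fit(CausalLMModule(), train_dataloaders=your_packed_dataloader)  # hypothetical dataloader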

No commits in the last 6 months.

Use this if you need to train or fine-tune large, GPT-like text models on large datasets across multiple machines and you want advanced distributed training features such as tensor parallelism or data packing (a sketch of packing follows the next paragraph).

Not ideal if you are working with non-text models, or if you prefer a simpler, less customizable solution for smaller models or single-GPU training.
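
The data packing mentioned above is, in general, the trick of concatenating tokenized documents and slicing them into fixed-length blocks so no training step wastes compute on padding. A minimal sketch of the idea (not this repo's implementation; the function name and block size are illustrative):

from typing import Iterable

def pack_sequences(token_streams: Iterable[list[int]], block_size: int = 2048) -> list[list[int]]:
    """Concatenate token lists and cut them into fixed-size blocks; drops the trailing remainder."""
    buffer: list[int] = []
    blocks: list[list[int]] = []
    for tokens in token_streams:
        buffer.extend(tokens)
        while len(buffer) >= block_size:
            blocks.append(buffer[:block_size])
            buffer = buffer[block_size:]
    return blocks

# Three short "documents" packed into 8-token blocks:
print(pack_sequences([[1, 2, 3], [4, 5, 6, 7, 8], [9, 10, 11, 12]], block_size=8))
# [[1, 2, 3, 4, 5, 6, 7, 8]]  (the trailing tokens 9-12 are dropped as an incomplete block)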

large-language-models model-training deep-learning-engineering natural-language-processing distributed-computing
Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 13 / 25

Stars: 24
Forks: 4
Language: Python
License: Apache-2.0
Last pushed: Jul 31, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ShinoharaHare/LLM-Training"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
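
For scripted access, a Python equivalent of the curl call above; the endpoint comes from this page, but the response schema is not documented here, so the snippet only prints the raw JSON:

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/ShinoharaHare/LLM-Training"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface HTTP errors (e.g. rate limiting past 100 requests/day)
print(resp.json())       # field names are not documented on this page, so just inspect the payload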