galilai-group/stable-pretraining

Reliable, minimal and scalable library for pretraining foundation and world models

Quality score: 56/100 (Established)

This project helps machine learning engineers and researchers efficiently train foundation models using self-supervised learning techniques. It takes raw, unlabeled datasets (such as images or text) and produces general-purpose models that can then be adapted to many specific tasks. Its core value is real-time visibility into the model's learning process, so training issues can be identified and fixed quickly.


Use this if you are a machine learning engineer or researcher focused on developing powerful, general-purpose AI models from large, unlabeled datasets using self-supervised methods, and you need robust tools for monitoring and debugging the training process.

Not ideal if you are primarily working on traditional supervised learning tasks, or if you are not deeply involved in developing and pretraining large-scale foundation models.

foundation-model-training self-supervised-learning representation-learning machine-learning-engineering model-debugging
No package published · No dependents

Score breakdown (the four 25-point subscores below add up to the overall 56/100):

Maintenance: 10/25
Adoption: 10/25
Maturity: 16/25
Community: 20/25
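The relationship between the subscores and the headline number appears to be a straight sum; a minimal sketch of that arithmetic is below. The summation rule is inferred from the displayed numbers, not from any documented scoring methodology.

# Apparent scoring arithmetic: the four 25-point subscores add up to the
# overall score. This is an inference from the numbers shown above, not
# a documented formula.
subscores = {"maintenance": 10, "adoption": 10, "maturity": 16, "community": 20}
overall = sum(subscores.values())
assert overall == 56  # matches the displayed 56/100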

Stars: 133
Forks: 27
Language: Python
License: MIT
Last pushed: Mar 05, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/galilai-group/stable-pretraining"

Open to everyone: 100 requests/day, no key needed. A free key raises the limit to 1,000/day.
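For programmatic access beyond a one-off curl, a minimal Python sketch is below. The endpoint URL and the keyless 100-requests/day tier come from the listing above; the response schema is not documented here, so the code prints whatever JSON comes back rather than assuming field names.

# Minimal sketch: fetch this project's quality data from the endpoint above.
# Uses the documented keyless tier (100 requests/day); the shape of the
# returned JSON is unknown here, so it is printed verbatim for inspection.
import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/galilai-group/stable-pretraining")

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))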