wxhcore/bumblecore

An LLM training framework built from the ground up, featuring a custom BumbleBee architecture and end-to-end support for multiple open-source models across Pretraining → SFT → RLHF/DPO.

Score: 49 / 100 (Emerging)

This framework helps deep learning researchers and algorithm engineers build and train large language models (LLMs) from scratch. It takes raw text or instruction-formatted datasets and produces a custom-trained LLM ready for a specific application, giving the user complete control over every aspect of the training process. Learners who want to understand LLM internals will also find it useful.

Use this if you need fine-grained control over the entire large language model training pipeline, from architecture design to specific loss functions, and want to deeply customize existing models or create new ones.

Not ideal if you prefer high-level APIs or pre-built solutions for quick, off-the-shelf LLM fine-tuning without diving into the underlying implementation details.

large-language-models deep-learning-research natural-language-processing model-customization algorithm-development
No package · No dependents
Maintenance 10 / 25
Adoption 8 / 25
Maturity 13 / 25
Community 18 / 25

Stars: 63
Forks: 13
Language: Python
License: Apache-2.0
Last pushed: Feb 09, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/wxhcore/bumblecore"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
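If you prefer Python over curl, here is a minimal sketch of the same request using the requests library. It hits the endpoint shown above; the assumption that the response body is JSON is ours, since the schema is not documented here.

import requests  # third-party; install with `pip install requests`

# Same endpoint as the curl example above.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/wxhcore/bumblecore"

resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on rate limiting or server errors

data = resp.json()  # assumes a JSON response
print(data)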