Spico197/MoE-SFT

🍼 Official implementation of "Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts"

Score: 23 / 100 (Experimental)

This project helps machine learning engineers and researchers optimize how they train Mixture-of-Experts (MoE) models. It takes multiple instruction datasets (such as creative writing, coding, or math problems) and dynamically adjusts how often each dataset is sampled as training progresses. The result is a more efficient, better-performing MoE model across a range of tasks.
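To make the idea concrete, here is a minimal sketch of dynamic data mixing: per-dataset feedback signals (for example, a loss or routing statistic) are turned into sampling weights, and each training batch is drawn according to those weights. The function names, the signals, and the softmax update rule below are illustrative assumptions for this sketch, not the repository's actual code or API.

import math
import random

def update_mixing_weights(signals, temperature=1.0):
    """Map per-dataset signals to normalized sampling probabilities (softmax)."""
    scaled = [s / temperature for s in signals]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def sample_batch(datasets, weights, batch_size=4):
    """Draw a mixed batch, picking each example's source dataset by its weight."""
    names = list(datasets)
    return [
        random.choice(datasets[random.choices(names, weights=weights, k=1)[0]])
        for _ in range(batch_size)
    ]

datasets = {
    "writing": ["write a poem about rain", "draft a short story opening"],
    "code":    ["implement quicksort in Python", "fix this off-by-one bug"],
    "math":    ["solve 2x + 3 = 11", "prove the sum of two odd numbers is even"],
}
weights = [1 / len(datasets)] * len(datasets)  # start from a uniform mixture

for step in range(3):
    batch = sample_batch(datasets, weights)
    signals = [0.2, 0.9, 0.4]  # placeholder feedback; here "code" looks hardest
    weights = update_mixing_weights(signals)
    print(f"step {step}: mix = {[round(w, 2) for w in weights]}, drew {len(batch)} examples")

In a real training loop the placeholder signals would come from the model itself (e.g. per-dataset evaluation loss or expert-routing balance), so the mixture shifts toward whichever datasets the model is currently handling worst.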

No commits in the last 6 months.

Use this if you are training Mixture-of-Experts models and want to improve their performance and efficiency by dynamically managing your training data.

Not ideal if you are working with standard, non-MoE large language models or do not have access to diverse, instruction-tuned datasets.

Tags: AI model training, large language models, machine learning research, model optimization, natural language processing
Status: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 0 / 25


Stars: 41
Forks:
Language: Python
License: Apache-2.0
Last pushed: Sep 29, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Spico197/MoE-SFT"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.