declare-lab/Auto-Scaling

[arXiv 2024] Official Implementation of the paper: "Towards Robust Instruction Tuning on Multimodal Large Language Models"

Score: 35 / 100 (Emerging)

This tool helps AI/ML researchers and data scientists improve the performance of Multimodal Large Language Models (MLLMs) by automatically generating a much larger set of training instructions from a small seed set: it expands existing instruction datasets by up to 30x, producing a more robust and diverse corpus for fine-tuning.

Use this if you need to fine-tune MLLMs for better performance but have only a limited amount of instruction data.

Not ideal if you are not working with MLLMs or instruction tuning, or if your instruction datasets are already sufficiently large and diverse.
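
The paper's actual pipeline lives in the repository's notebooks; as a rough illustration only, the core idea of expanding a seed instruction set might look like the Python sketch below. Everything in it (the query_llm placeholder, the generate_variants and expand_dataset helpers, the prompt wording, the n_variants parameter) is hypothetical and is not this project's API.

import json

def query_llm(prompt: str) -> str:
    """Placeholder: call your preferred LLM client here and return its
    text completion. NOT part of the Auto-Scaling repository."""
    raise NotImplementedError

def generate_variants(seed: str, n_variants: int = 30) -> list[str]:
    """Ask the model to rewrite one seed instruction n_variants ways."""
    prompt = (
        f"Rewrite the following instruction in {n_variants} diverse ways, "
        "preserving its intent. Return a JSON list of strings.\n\n"
        f"Instruction: {seed}"
    )
    return json.loads(query_llm(prompt))

def expand_dataset(seeds: list[str]) -> list[str]:
    """Expand a small seed set into a roughly 30x larger instruction set."""
    expanded = []
    for seed in seeds:
        expanded.append(seed)                     # keep the original
        expanded.extend(generate_variants(seed))  # add generated variants
    return expanded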

Tags: AI-model-training, LLM-fine-tuning, data-augmentation, multimodal-AI, natural-language-processing
No package · No dependents

Maintenance: 6 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 8 / 25
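
The four sub-scores appear to sum to the headline score: 6 + 5 + 16 + 8 = 35 out of a possible 4 × 25 = 100.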

Stars: 9
Forks: 1
Language: Jupyter Notebook
License: MIT
Last pushed: Dec 05, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/declare-lab/Auto-Scaling"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
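
If you prefer Python to curl, here is a minimal standard-library sketch of the same request; it assumes the endpoint returns a JSON body, which is not confirmed here.

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/declare-lab/Auto-Scaling")

# No API key needed within the free 100 requests/day tier.
with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)  # assumes a JSON response body

print(json.dumps(data, indent=2))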