TinyLLaVA/TinyLLaVA_Factory

A Framework of Small-scale Large Multimodal Models

Quality score: 58 / 100 (Established)

This project offers a specialized framework for creating and customizing small-scale Large Multimodal Models (LMMs). It takes raw image and text data, along with configuration choices for language models, vision models, and training methods, and produces a fine-tuned LMM. It is aimed at machine learning researchers and practitioners who want to build efficient LMMs without extensive coding.

962 stars. Maintained, with 1 commit in the last 30 days.

Use this if you are a machine learning researcher or engineer looking to develop or experiment with custom, compact multimodal AI models that can understand both images and text.

Not ideal if you are an end-user who just wants an off-the-shelf multimodal AI model, with no customization or training.

multimodal-ai machine-learning-engineering ai-model-development model-customization computer-vision-nlp
No package · No dependents
Maintenance: 13 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 19 / 25

The overall score appears to be the sum of the four 25-point category scores: 13 + 10 + 16 + 19 = 58.
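A quick check of that assumption in Python:

# Category scores from the breakdown above.
scores = {"maintenance": 13, "adoption": 10, "maturity": 16, "community": 19}
assert sum(scores.values()) == 58  # matches the 58 / 100 shown above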

Stars: 962
Forks: 96
Language: Python
License: Apache-2.0
Last pushed: Mar 11, 2026
Commits (30d): 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/TinyLLaVA/TinyLLaVA_Factory"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
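For scripted access, a minimal Python sketch like the one below fetches the same record as the curl command. The JSON field names used at the end (score, stars) are assumptions for illustration; the response schema isn't documented on this page, so inspect the raw output first.

import requests

# Same endpoint as the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/TinyLLaVA/TinyLLaVA_Factory")

def fetch_quality(url: str = URL) -> dict:
    """Fetch the quality record and return the parsed JSON body."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()  # surface HTTP errors (e.g., rate limiting)
    return resp.json()

if __name__ == "__main__":
    record = fetch_quality()
    # Hypothetical field names; confirm against the actual response.
    print(record.get("score"), record.get("stars"))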