kyegomez/LIMoE

Implementation of "the first large-scale multimodal mixture of experts models," from the paper "Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts".

Quality score: 48 / 100 (Emerging)

This project helps machine learning engineers or researchers build advanced AI models that understand both text and images simultaneously. It takes in textual data and corresponding images, then processes them to identify relationships and meaning across both types of information. The output is a sophisticated model capable of complex multimodal analysis, useful for tasks like enhanced search or content understanding.
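To make "mixture of experts" concrete, here is a minimal sketch of the core routing idea the paper builds on: a learned gate sends each token (text or image) to one of several expert transformations. This is an illustration of the general technique only, with made-up dimensions and random weights; it is not the repository's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, EXPERTS, TOKENS = 16, 4, 8

# Stand-in "trained" parameters, random for illustration.
gate_w = rng.standard_normal((DIM, EXPERTS))
expert_w = rng.standard_normal((EXPERTS, DIM, DIM))

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route each token (row of x) to its top-1 expert's linear map."""
    scores = x @ gate_w              # (tokens, experts) gating logits
    top = scores.argmax(axis=1)      # winning expert index per token
    out = np.empty_like(x)
    for t, e in enumerate(top):
        out[t] = x[t] @ expert_w[e]  # only the chosen expert runs
    return out

x = rng.standard_normal((TOKENS, DIM))  # mixed text/image token embeddings
y = moe_layer(x)
print(y.shape)  # (8, 16)
```

The sparsity is the point: each token activates only one expert, so capacity grows with the number of experts while per-token compute stays roughly constant.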

Available on PyPI.

Use this if you are a machine learning engineer building cutting-edge multimodal AI models and need an efficient way to combine language and image understanding.

Not ideal if you are looking for an out-of-the-box solution for a specific application without any machine learning development.

multimodal-AI-development machine-learning-research computer-vision natural-language-processing AI-model-building
Maintenance 10 / 25
Adoption 7 / 25
Maturity 25 / 25
Community 6 / 25


Stars

36

Forks

2

Language

Python

License

MIT

Last pushed

Jan 31, 2026

Commits (30d)

0

Dependencies

5

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/LIMoE"

Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
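The curl command above can also be issued from Python using only the standard library. The URL is taken from this page; the shape of the JSON response is an assumption, so the sketch just returns the parsed payload as-is.

```python
import json
import urllib.request

# Endpoint shown on this page; 100 requests/day without a key.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/LIMoE"

def fetch_quality(url: str = URL, timeout: float = 10.0) -> dict:
    """Fetch the quality record and return it as a dict.

    Raises urllib.error.URLError on network failure; the response
    field names are not documented here and should be inspected.
    """
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(fetch_quality())
```

For scripted use, checking `resp.status` and handling rate-limit errors (after 100 requests/day) would be sensible additions.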