kyegomez/LIMoE
Implementation of "the first large-scale multimodal mixture of experts models," from the paper "Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts".
This project is aimed at machine learning engineers and researchers building models that understand text and images jointly. It takes paired textual and image data and learns relationships and meaning across both modalities, producing a model suited to multimodal tasks such as cross-modal search and content understanding.
Available on PyPI.
Use this if you are a machine learning engineer building cutting-edge multimodal AI models and need an efficient way to combine language and image understanding.
Not ideal if you are looking for an out-of-the-box solution for a specific application without any machine learning development.
Stars
36
Forks
2
Language
Python
License
MIT
Category
Last pushed
Jan 31, 2026
Commits (30d)
0
Dependencies
5
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/LIMoE"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
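The endpoint above is a plain HTTP GET, so it can also be scripted. A minimal Python sketch using only the standard library; it assumes the response body is JSON and that the `owner`/`repo` path segments follow the pattern shown in the curl command (the function names here are illustrative, not part of the API):

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """GET the quality record for a repository; assumes a JSON response."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Fetches the record for this repository and prints the raw JSON.
    print(fetch_quality("kyegomez", "LIMoE"))
```

Because the free tier is rate-limited, cache responses locally if you poll many repositories.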
Related models
dohlee/chromoformer
The official code implementation for Chromoformer in PyTorch. (Lee et al., Nature Communications. 2022)
ahans30/goldfish-loss
[NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs
yinboc/trans-inr
Transformers as Meta-Learners for Implicit Neural Representations, in ECCV 2022
bloomberg/MixCE-acl2023
Implementation of the MixCE method described in the ACL 2023 paper by Zhang et al.
ibnaleem/mixtral.py
A Python module for running the Mixtral-8x7B language model with customisable precision and...