kyegomez/MobileVLM
Implementation of the LDP (Lightweight Downsample Projector) module in PyTorch and Zeta, from the paper "MobileVLM: A Fast, Strong and Open Vision Language Assistant for Mobile Devices"
This project helps machine learning engineers build efficient vision-language models that run smoothly on mobile phones and other devices with limited processing power. It takes visual features as input and compresses them into a more compact representation that is cheaper for mobile hardware to process. It's aimed at developers building AI applications for edge devices.
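To make the compression step concrete, here is a minimal sketch of an LDP-style projector based on the paper's description: pointwise convolutions map the vision encoder's channel dimension to the language model's hidden dimension, and a stride-2 depthwise convolution cuts the token count by 4x. The class name, layer arrangement, and default dimensions are illustrative assumptions, not the repo's actual API.

```python
import torch
from torch import nn


class LDPBlockSketch(nn.Module):
    """Illustrative LDP-style projector (sketch, not the repo's API).

    Pointwise (1x1) convs project the channel dim; a stride-2 depthwise
    conv halves each spatial side, so the token count drops by 4x.
    """

    def __init__(self, in_dim: int = 1024, out_dim: int = 2048):
        super().__init__()
        # 1x1 convs act per-token, like a small MLP over channels
        self.proj = nn.Sequential(
            nn.Conv2d(in_dim, out_dim, kernel_size=1),
            nn.GELU(),
            nn.Conv2d(out_dim, out_dim, kernel_size=1),
        )
        # Depthwise conv (groups == channels) with stride 2 downsamples
        # cheaply: each channel is filtered independently.
        self.down = nn.Conv2d(
            out_dim, out_dim, kernel_size=3, stride=2,
            padding=1, groups=out_dim,
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) feature map from a vision encoder
        return self.down(self.proj(x))


# A 24x24 grid of ViT patch features -> 12x12 after downsampling
feats = torch.randn(1, 1024, 24, 24)
out = LDPBlockSketch()(feats)
print(tuple(out.shape))  # (1, 2048, 12, 12): 576 tokens reduced to 144
```

The depthwise/pointwise split is what keeps the projector light: a full 3x3 convolution at these widths would dominate the parameter budget, while the depthwise variant only adds `9 * out_dim` weights for the downsampling step.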
No commits in the last 6 months. Available on PyPI.
Use this if you are a developer building AI models for mobile devices and need to process images efficiently while minimizing computational resources.
Not ideal if you are working with large-scale server-side AI models where computational efficiency on edge devices is not a primary concern.
Stars: 15
Forks: —
Language: Python
License: MIT
Category:
Last pushed: Mar 11, 2024
Commits (30d): 0
Dependencies: 3
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/MobileVLM"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
kyegomez/RT-X
Pytorch implementation of the models RT-1-X and RT-2-X from the paper: "Open X-Embodiment:...
kyegomez/PALI3
Implementation of PALI3 from the paper "PALI-3 VISION LANGUAGE MODELS: SMALLER, FASTER, STRONGER"
chuanyangjin/MMToM-QA
[🏆Outstanding Paper Award at ACL 2024] MMToM-QA: Multimodal Theory of Mind Question Answering
lyuchenyang/Macaw-LLM
Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration
Muennighoff/vilio
🥶Vilio: State-of-the-art VL models in PyTorch & PaddlePaddle