UbiquitousLearning/mllm
Fast Multimodal LLM on Mobile Devices
This project lets mobile developers run advanced AI directly inside their apps, processing complex data such as images and text on the user's device. It takes large language models and multimodal models, optimizes them, and produces highly efficient versions that run fast on phones and other edge hardware. It is aimed at developers building for Android or other edge platforms who need on-device AI for tasks like real-time image analysis or smart text processing.
1,429 stars. Actively maintained with 19 commits in the last 30 days.
Use this if you are a mobile app developer who needs to run large language models or multimodal AI models directly on user devices, ensuring fast performance without relying on cloud servers.
Not ideal if you are developing AI models for cloud-based deployment or desktop environments, or if your application does not require on-device processing for AI inference.
Stars: 1,429
Forks: 175
Language: C++
License: MIT
Category:
Last pushed: Mar 07, 2026
Commits (30d): 19
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/UbiquitousLearning/mllm"
The endpoint is open to everyone at 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
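If you would rather consume the endpoint from code than from curl, a minimal Python sketch is below. The response schema is not documented on this page, so the field names used here (`stars`, `forks`) are assumptions for illustration only; adapt them to whatever JSON the endpoint actually returns.

```python
import json
import urllib.request

# The public endpoint shown above (100 requests/day without a key).
API_URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/UbiquitousLearning/mllm"

def fetch_quality(url: str = API_URL) -> dict:
    """Fetch the repo-quality record and decode it as JSON."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def summarize(data: dict) -> str:
    """One-line summary; 'stars' and 'forks' are ASSUMED field names."""
    return f"{data.get('stars', '?')} stars, {data.get('forks', '?')} forks"

if __name__ == "__main__":
    print(summarize(fetch_quality()))
```

Note the `data.get(..., '?')` fallbacks: since the schema is assumed, missing fields degrade to a placeholder instead of raising `KeyError`.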
Related models
ludwig-ai/ludwig
Low-code framework for building custom LLMs, neural networks, and other AI models
withcatai/node-llama-cpp
Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema...
mudler/LocalAI
🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and...
zhudotexe/kani
kani (カニ) is a highly hackable microframework for tool-calling language models. (NLP-OSS @ EMNLP 2023)
SciSharp/LLamaSharp
A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.