UbiquitousLearning/mllm

Fast Multimodal LLM on Mobile Devices

Quality score: 64 / 100 (Established)

This project lets mobile application developers integrate advanced AI capabilities directly into their apps, processing complex data like images and text on users' devices. It takes large language models and multimodal models, optimizes them, and produces highly efficient versions that run quickly on mobile phones and edge devices. It is aimed at developers building on Android or other edge platforms who need on-device AI for tasks like real-time image analysis or smart text processing.

1,429 stars. Actively maintained with 19 commits in the last 30 days.

Use this if you are a mobile app developer who needs to run large language models or multimodal AI models directly on user devices, ensuring fast performance without relying on cloud servers.

Not ideal if you are developing AI models for cloud-based deployment or desktop environments, or if your application does not require on-device processing for AI inference.

Tags: mobile-app-development, on-device-ai, edge-ai, multimodal-ai, machine-learning-deployment
No package · No dependents
Maintenance: 17 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 21 / 25


Stars: 1,429
Forks: 175
Language: C++
License: MIT
Last pushed: Mar 07, 2026
Commits (30d): 19

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/UbiquitousLearning/mllm"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
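The curl call above can also be scripted. The sketch below builds the same endpoint URL for any repository and parses a response body; the JSON field names (`score`, `grade`, `stars`) are assumptions for illustration, since the page does not document the response schema.

```python
import json
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, owner: str, repo: str) -> str:
    # Build the endpoint URL shown on the page for any repository,
    # percent-encoding each path segment to be safe.
    return f"{BASE}/{quote(registry, safe='')}/{quote(owner, safe='')}/{quote(repo, safe='')}"

url = quality_url("transformers", "UbiquitousLearning", "mllm")
print(url)

# Parsing a response body. This sample payload is illustrative only;
# the real API's field names may differ.
sample = '{"score": 64, "grade": "Established", "stars": 1429}'
report = json.loads(sample)
print(report["score"], report["grade"])
```

To fetch live data, pass the URL to `urllib.request.urlopen` or `requests.get`; the free-key tier presumably requires attaching the key to the request, though the page does not say how.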