adamydwang/mobilellama

a lightweight C++ LLaMA inference engine for mobile devices

22 / 100
Experimental

MobileLLaMA is a lightweight C++ inference engine for running LLaMA-based models directly on mobile devices. It loads a pre-trained LLaMA model and runs inference inside a mobile application, enabling on-device AI without a constant internet connection. It is aimed at mobile app developers who want to integrate language-model capabilities into their apps.

No commits in the last 6 months.

Use this if you are developing mobile applications and need to embed lightweight LLaMA model inference directly on users' devices.

Not ideal if you are looking for a general-purpose AI development framework or if you only need to run LLaMA models on servers or desktop computers.

mobile-app-development on-device-AI embedded-AI mobile-machine-learning AI-inference
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 0 / 25

How are scores calculated?

Stars: 15
Forks:
Language: C++
License: MIT
Last pushed: Oct 28, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/adamydwang/mobilellama"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
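The same endpoint can be called from code. A minimal Python sketch using only the standard library; it assumes the endpoint returns JSON, and the `quality_url`/`fetch_quality` helper names are illustrative, not part of the API:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, owner: str, repo: str) -> str:
    # Build the quality-score endpoint URL for a given repository.
    return f"{BASE}/{registry}/{owner}/{repo}"

def fetch_quality(registry: str, owner: str, repo: str) -> dict:
    # Fetch and decode the JSON quality report (requires network access;
    # the response schema is assumed, not documented here).
    with urllib.request.urlopen(quality_url(registry, owner, repo)) as resp:
        return json.load(resp)

# The same endpoint the curl command above hits:
url = quality_url("transformers", "adamydwang", "mobilellama")
```

Without a key, requests count against the shared 100/day quota, so cache responses rather than polling.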