unit-mesh/edge-infer

EdgeInfer enables efficient edge intelligence by running small AI models, including embedding and ONNX models, on resource-constrained devices such as Android, iOS, or MCUs for real-time decision-making.

Score: 31 / 100 (Emerging)

This project helps developers integrate small AI models directly into everyday devices such as phones, tablets, or even tiny microcontrollers. It takes pre-trained AI models, such as those for understanding text or recognizing objects, and optimizes them to run efficiently on hardware with limited computing power. It is aimed at software developers building applications that need real-time AI capabilities without relying on cloud services.

No commits in the last 6 months.

Use this if you are a software developer building applications where AI models need to run directly on user devices for immediate insights, offline functionality, or data privacy.

Not ideal if you need to run very large, complex AI models that require significant computational resources, or if your application can instead rely on powerful cloud-based AI services.

mobile-app-development embedded-systems on-device-AI real-time-inference edge-computing
Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 7 / 25

How are scores calculated?

Stars: 50
Forks: 3
Language: Rust
License: MIT
Last pushed: Apr 17, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/unit-mesh/edge-infer"

Open to everyone: 100 requests/day with no key needed. Get a free API key for 1,000 requests/day.
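For scripted use, the curl call above can be wrapped in a small client. The sketch below builds the per-repository endpoint URL and parses a response; note that the response field names (`score`, `tier`, `breakdown`) are assumptions shaped after the figures shown on this page, not a documented schema.

```python
import json
from urllib.parse import quote

# Base endpoint taken from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality endpoint URL for a given owner/repo pair."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

# Hypothetical response body, modeled on the numbers displayed above;
# the actual API may use different field names.
sample_response = json.loads("""
{
  "score": 31,
  "tier": "Emerging",
  "breakdown": {"maintenance": 0, "adoption": 8, "maturity": 16, "community": 7}
}
""")

url = quality_url("unit-mesh", "edge-infer")
print(url)
print(sample_response["score"], sample_response["tier"])
```

In a real script you would fetch `url` (e.g. with `urllib.request.urlopen` or `requests.get`) and decode the JSON body instead of using the inline sample.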