trymirai/uzu-ts
A high-performance inference engine for AI models
This project lets application developers run AI model inference directly inside their applications instead of relying on cloud-based services. It takes a pre-trained AI model and user input (such as text for chat or summarization) and produces the model's response locally, avoiding network round-trips and keeping data on-device. It is aimed at developers building on Apple Silicon who need to embed AI capabilities without cloud dependencies.
Use this if you are an application developer building on Apple Silicon and want to embed AI model inference directly into your application for performance, privacy, and cost control.
Not ideal if you need to train AI models, are developing on non-Apple Silicon hardware, or are comfortable using cloud-based AI inference services.
Stars: 8
Forks: 1
Language: TypeScript
License: MIT
Category: (not listed)
Last pushed: Feb 17, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/trymirai/uzu-ts"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
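If you prefer to call the endpoint from code rather than curl, a minimal TypeScript sketch is below. It uses the global fetch available in Node 18+ and browsers; the URL path mirrors the curl command above, but the shape of the JSON response is an assumption, since the schema is not documented here.

```typescript
// Base path for the quality API shown in the curl example above.
const API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools";

// Build the endpoint URL for a given owner/repo pair.
function qualityUrl(owner: string, repo: string): string {
  return `${API_BASE}/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}`;
}

// Fetch and parse the JSON payload. The response schema is not documented
// on this page, so the return type is left as `unknown`.
async function fetchQuality(owner: string, repo: string): Promise<unknown> {
  const res = await fetch(qualityUrl(owner, repo));
  if (!res.ok) {
    throw new Error(`Request failed with status ${res.status}`);
  }
  return res.json();
}

// Example (unauthenticated, within the 100 requests/day limit):
// fetchQuality("trymirai", "uzu-ts").then(console.log);
```

Without a key this stays within the 100 requests/day limit; a free key raises that to 1,000/day as noted above.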
Higher-rated alternatives
microsoft/multilspy
multilspy is an LSP client library in Python intended to be used to build applications around...
mlc-ai/xgrammar
Fast, Flexible and Portable Structured Generation
vicentereig/dspy.rb
The Ruby framework for programming—rather than prompting—language models.
feenkcom/gt4llm
A GT package for working with LLMs
Evref-BL/Pharo-LLMAPI
Use LLM API from Pharo