SearchSavior/OpenArc

Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, Embedding and Rerank models over OpenAI endpoints.

Quality score: 48/100 (Emerging)

OpenArc lets you run AI models such as large language models (LLMs), vision-language models (VLMs), speech-to-text (Whisper), and text-to-speech (Kokoro-TTS) on your own Intel-powered computer. It serves these models through an OpenAI-compatible API, so you can get text generation, image analysis, audio transcription, or speech synthesis directly from your device. Data scientists, AI researchers, and developers who want to deploy local, private AI solutions on Intel hardware would find this useful.
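Because the server speaks the OpenAI API, a standard OpenAI client should work against it. A minimal Python sketch, assuming OpenArc is already running locally; the base URL, port, and model ID below are placeholders, not values taken from this listing:

from openai import OpenAI

# Assumptions: OpenArc exposes an OpenAI-compatible /v1 endpoint on
# localhost:8000, and "my-openvino-model" stands in for whatever model
# ID your instance actually loads. Both are placeholders.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

response = client.chat.completions.create(
    model="my-openvino-model",
    messages=[{"role": "user", "content": "Hello from an Intel machine!"}],
)
print(response.choices[0].message.content)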


Use this if you need to run and serve a variety of AI models (LLMs, VLMs, speech, embeddings, rerankers) locally on your Intel CPU, GPU, or NPU devices, with an OpenAI-compatible API.

Not ideal if you prefer cloud-based AI services, or if your primary hardware is not Intel-based.

Tags: AI deployment, local inference, natural language processing, computer vision, speech recognition
No package · No dependents
Maintenance: 10/25
Adoption: 10/25
Maturity: 16/25
Community: 12/25


Stars: 341
Forks: 18
Language: Python
License: Apache-2.0
Last pushed: Feb 22, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/SearchSavior/OpenArc"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
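For scripted use, the same endpoint can be fetched from Python. A small sketch using only the standard library; the response schema is not documented on this page, so the example simply pretty-prints whatever JSON the API returns:

import json
import urllib.request

# Quality-score endpoint from the curl example above. The response
# schema is undocumented here, so we just pretty-print the JSON body.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/SearchSavior/OpenArc"

with urllib.request.urlopen(URL, timeout=10) as resp:
    print(json.dumps(json.load(resp), indent=2))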