NikolasEnt/ollama-webui-intel

Ollama with Intel (i)GPU acceleration in Docker, with benchmarks

Score: 42 / 100 (Emerging)

This project helps individual users and small teams run large language models (LLMs) and vision-language models (VLMs) directly on their personal computers, leveraging Intel integrated or dedicated GPUs for faster performance. It provides a simple setup to get an accelerated version of Ollama with a user-friendly web interface. You can input text or images and receive AI-generated responses, ideal for local experimentation and development.

Use this if you want to run powerful AI models locally on your Intel-powered PC without relying on cloud services, getting faster responses for tasks like text generation or image understanding.

Not ideal if you don't have an Intel GPU, require enterprise-grade stability and support, or are already satisfied with cloud-based AI solutions.
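As a rough orientation, Docker-based projects like this are usually launched with a clone-and-compose flow. The sketch below is a hedged assumption: the repository path is real, but whether the repo ships a docker-compose.yml, and what its services are called, is not confirmed by this listing.

# Hypothetical quick start (the compose setup is an assumption;
# only the GitHub path is taken from this page):
git clone https://github.com/NikolasEnt/ollama-webui-intel.git
cd ollama-webui-intel
docker compose up -d   # would start Intel-accelerated Ollama plus the web UI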

local-AI personal-AI-assistant home-AI-experimentation offline-LLM Intel-GPU-acceleration
No license · No package · No dependents
Maintenance 10 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 17 / 25

How are scores calculated?
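Judging by the numbers on this page, the overall score appears to be the sum of the four 25-point categories: 10 + 7 + 8 + 17 = 42.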

Stars: 41
Forks: 8
Language: Python
License: none
Last pushed: Mar 11, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/NikolasEnt/ollama-webui-intel"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
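If you want the JSON response pretty-printed on the command line, piping through jq works; the API's field names are not documented here, so none are assumed:

curl -s "https://pt-edge.onrender.com/api/v1/quality/llm-tools/NikolasEnt/ollama-webui-intel" | jq .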