NikolasEnt/ollama-webui-intel
Ollama with Intel (i)GPU acceleration in Docker, plus benchmarking
This project helps individual users and small teams run large language models (LLMs) and vision-language models (VLMs) directly on their own machines, using Intel integrated or dedicated GPUs for acceleration. It provides a simple Docker-based setup for a GPU-accelerated build of Ollama together with a user-friendly web interface: you can submit text or images and receive AI-generated responses, which makes it well suited to local experimentation and development.
Use this if you want to run powerful AI models locally on an Intel-powered PC without relying on cloud services, and want faster responses for tasks like text generation or image understanding.
Not ideal if you don't have an Intel GPU, require enterprise-grade stability and support, or are already satisfied with cloud-based AI solutions.
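For context, Ollama serves an HTTP API (by default on port 11434) that web interfaces like this one talk to. The snippet below is a minimal sketch of querying such a local instance from Python; the host/port, the model name ("llama3"), and the assumption that the container maps Ollama's default port to localhost are illustrative guesses, not details taken from this repository.

import requests

# Minimal sketch: send a prompt to a locally running Ollama instance.
# Assumptions: the container exposes Ollama's default port 11434 on
# localhost, and a model named "llama3" has already been pulled.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "llama3",  # hypothetical model name, adjust to whatever is installed
    "prompt": "Explain what an iGPU is in one sentence.",
    "stream": False,    # return one JSON object instead of a token stream
}

resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["response"])  # generated text

If this stack exposes a different port or model, adjust OLLAMA_URL and the model name accordingly.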
Stars: 41
Forks: 8
Language: Python
License: —
Category: —
Last pushed: Mar 11, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/NikolasEnt/ollama-webui-intel"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
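As an alternative to curl, here is a small Python sketch of the same request. The URL is the endpoint shown above; the response fields are not documented here, so the example simply prints whatever JSON comes back, and it assumes the keyless 100 requests/day tier since the key mechanism is not specified.

import requests

# Fetch the quality data for this repository from the public endpoint above.
# No API key is used, relying on the keyless 100 requests/day tier.
url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/NikolasEnt/ollama-webui-intel"

resp = requests.get(url, timeout=30)
resp.raise_for_status()
data = resp.json()  # response schema is not documented here, so just print it
print(data)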
Higher-rated alternatives
open-compass/opencompass
OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral,...
IBM/unitxt
🦄 Unitxt is a Python library for enterprise-grade evaluation of AI performance, offering the...
lean-dojo/LeanDojo
Tool for data extraction and interacting with Lean programmatically.
GoodStartLabs/AI_Diplomacy
Frontier Models playing the board game Diplomacy.
google/litmus
Litmus is a comprehensive LLM testing and evaluation tool designed for GenAI Application...