Uminosachi/open-llm-webui
This repository provides a web application for running relatively compact Large Language Models (LLMs) locally.
The application lets you run smaller LLMs directly on your own computer through a simple browser interface: you type a question or prompt, and the model generates a response, much like a chatbot. It is aimed at individual users, researchers, and small teams who want to experiment with different LLMs without extensive technical setup or reliance on cloud services.
Use this if you want to experiment with or use different smaller AI chat models on your local machine via a user-friendly interface, keeping your data private and avoiding external API costs.
Not ideal if you need to run very large, cutting-edge LLMs that require substantial computational power not typically available on a personal computer, or if you need robust, enterprise-grade deployment features.
Stars: 47
Forks: 7
Language: Python
License: Apache-2.0
Category:
Last pushed: Feb 08, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Uminosachi/open-llm-webui"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
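The same endpoint can be called programmatically. Below is a minimal Python sketch, assuming the response is JSON; the response schema and the key-passing mechanism are not documented here, so only the URL construction and a plain unauthenticated GET are shown.

```python
import json
import urllib.request

# Base endpoint as shown in the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub owner/repo."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (requires network access;
    the payload structure is an assumption -- inspect a real response)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("Uminosachi", "open-llm-webui"))
```

Without a key this stays within the 100-requests/day limit; with a free key the limit rises to 1,000/day (how the key is supplied is not specified on this page).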
Higher-rated alternatives
mishushakov/llm-scraper
Turn any webpage into structured data using LLMs
Mobile-Artificial-Intelligence/maid
Maid is a free and open source application for interfacing with llama.cpp models locally, and...
run-llama/LlamaIndexTS
Data framework for your LLM applications. Focus on server side solution
JHubi1/ollama-app
A modern and easy-to-use client for Ollama
serge-chat/serge
A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy to use API.