JHubi1/ollama-app
A modern and easy-to-use client for Ollama
This tool provides a user-friendly interface for interacting with your local, private AI models: you enter prompts or questions, and the app displays responses generated by an Ollama server running on your network. It suits anyone who wants to use large language models locally and privately, without sending data to external services.
Use this if you want to chat with AI models you host yourself, ensuring your conversations remain entirely private within your local network.
Not ideal if you're looking for a cloud-based AI solution or don't already have an Ollama server set up.
Stars: 1,670
Forks: 196
Language: Dart
License: Apache-2.0
Category:
Last pushed: Oct 28, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/JHubi1/ollama-app"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
Related models
mishushakov/llm-scraper
Turn any webpage into structured data using LLMs
Mobile-Artificial-Intelligence/maid
Maid is a free and open source application for interfacing with llama.cpp models locally, and...
run-llama/LlamaIndexTS
Data framework for your LLM applications, focused on server-side solutions
serge-chat/serge
A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy to use API.
yuriwa/crewai-sheets-ui
Use Google Sheets as a GUI for crewAI