av/harbor
One command brings a complete pre-wired LLM stack with hundreds of services to explore.
Harbor helps developers quickly set up a complete local environment for building and experimenting with Large Language Models (LLMs). With a single command, you get pre-configured LLM backends like Ollama and frontends like Open WebUI, plus supporting tools for web search, voice interaction, and image generation. This is ideal for AI/ML engineers, researchers, or anyone prototyping LLM applications who needs a full local stack without complex manual setup.
2,498 stars. Actively maintained with 83 commits in the last 30 days.
Use this if you need a fully integrated, local development environment for building and testing AI applications with various LLMs and related services without the hassle of individual installations and configurations.
Not ideal if you're looking for a cloud-based solution or a simple API to access pre-trained models without needing a local development stack.
Stars: 2,498
Forks: 168
Language: TypeScript
License: Apache-2.0
Category:
Last pushed: Mar 12, 2026
Commits (30d): 83
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/av/harbor"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
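If you want to consume the endpoint programmatically, the sketch below parses a response of the shape implied by the stats above. Note that the field names (`repo`, `stars`, `forks`, `commits_30d`, etc.) are assumptions for illustration, not documented by the API:

```python
import json

# Hypothetical response body from the quality API above.
# The actual schema may differ; field names here are assumptions.
sample_response = """
{
  "repo": "av/harbor",
  "stars": 2498,
  "forks": 168,
  "language": "TypeScript",
  "license": "Apache-2.0",
  "commits_30d": 83
}
"""

data = json.loads(sample_response)
summary = f"{data['repo']}: {data['stars']} stars, {data['commits_30d']} commits in 30d"
print(summary)
```

In practice you would replace `sample_response` with the body returned by the `curl` call above (e.g. via `urllib.request` or `requests`), keeping the same parsing logic.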
Related tools
containers/ramalama
RamaLama is an open-source developer tool that simplifies the local serving of AI models from...
RunanywhereAI/runanywhere-sdks
Production-ready toolkit to run AI locally
runpod-workers/worker-vllm
The RunPod worker template for serving our large language model endpoints. Powered by vLLM.
foldl/chatllm.cpp
Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU)
FarisZahrani/llama-cpp-py-sync
Auto-synced CFFI ABI python bindings for llama.cpp with prebuilt wheels (CPU/CUDA/Vulkan/Metal).