skye-harris/hass_local_openai_llm
Home Assistant LLM integration for local OpenAI-compatible services (llama.cpp, vLLM, etc.)
This integration lets you use powerful local large language models (LLMs), served by engines such as llama.cpp or vLLM, directly within your Home Assistant smart home setup. It takes your voice commands or text inputs and uses the local LLM to understand and respond, enabling more intelligent home automation and interactions. It is aimed at Home Assistant users who want advanced, private, and customizable AI for their smart home assistant.
Use this if you want to integrate a private, self-hosted, and customizable large language model directly into your Home Assistant to power conversational AI and automation.
Not ideal if you prefer cloud-based AI services or do not have the technical expertise to set up and manage local LLM inference servers.
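Because the integration targets any OpenAI-compatible service, requests use the standard chat-completions wire format. A minimal sketch of building such a request (the base URL, port, model name, and system prompt are illustrative assumptions for a local llama.cpp or vLLM server, not values taken from this repo):

```python
import json

def build_chat_request(base_url: str, model: str, prompt: str) -> tuple[str, bytes]:
    """Build an OpenAI-compatible /v1/chat/completions request.

    The base_url and model are assumptions for a self-hosted server;
    the integration speaks this same wire format to any compatible backend.
    """
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [
            # Hypothetical system prompt; the integration's actual prompt differs.
            {"role": "system", "content": "You are a Home Assistant voice assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
    }
    return url, json.dumps(payload).encode()

# Example: a llama.cpp server on its default local port (assumed).
url, body = build_chat_request(
    "http://127.0.0.1:8080", "local-model", "Turn off the kitchen lights"
)
# POST `body` to `url` with Content-Type: application/json to get a completion.
```

Keeping inference on a machine you control is what makes the setup private: no command text ever leaves your network.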
Stars: 100
Forks: 14
Language: Python
License: —
Category:
Last pushed: Mar 11, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/skye-harris/hass_local_openai_llm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
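The endpoint follows the same path pattern for any repository. A small Python helper sketching that pattern (the URL shape is inferred from the curl example above; `quality_url` is a hypothetical name, and authenticated requests would need whatever key mechanism the API documents):

```python
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality/RAG endpoint URL for a GitHub owner/repo pair.

    Path shape inferred from the documented curl example; percent-encode
    the segments in case a name contains reserved characters.
    """
    return f"{API_BASE}/{quote(owner, safe='')}/{quote(repo, safe='')}"

# Reproduces the curl example's URL for this repo.
print(quality_url("skye-harris", "hass_local_openai_llm"))
```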
Higher-rated alternatives
langbot-app/LangBot
Production-grade platform for building agentic IM bots. Provides agents, knowledge-base orchestration, a plugin system /...
open-webui/open-webui
User-friendly AI Interface (Supports Ollama, OpenAI API, ...)
cactus-compute/cactus
Low-latency AI engine for mobile devices & wearables
sigoden/aichat
All-in-one LLM CLI tool featuring Shell Assistant, Chat-REPL, RAG, AI Tools & Agents, with...
rudrankriyam/Foundation-Models-Framework-Example
Example apps for Foundation Models Framework in iOS 26 and macOS 26