tib0/local-llama
Local Llama (L³) is an Electron app that runs Llama 3 models locally.
This desktop application lets you chat with large language models directly on your computer, with no internet connection or external service required. You enter prompts, the AI generates responses, and you can save conversations for later. It's designed for anyone who wants to use advanced AI chatbots privately and offline.
No commits in the last 6 months.
Use this if you need to interact with large language models and prioritize data privacy, offline access, and full control over the AI's behavior and models.
Not ideal if you prefer using cloud-based AI services or do not have sufficient computer resources to run large AI models locally.
Stars: 17
Forks: 1
Language: TypeScript
License: —
Category:
Last pushed: Jun 05, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/tib0/local-llama"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
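The same request can be made from code. A minimal TypeScript sketch using the built-in `fetch` API, assuming only the endpoint URL shown in the curl command above; the shape of the JSON response is not documented here, so it is typed loosely:

```typescript
// Base path taken from the curl example above.
const API_BASE = "https://pt-edge.onrender.com/api/v1/quality";

// Build the endpoint URL for a category and repo slug,
// e.g. ("llm-tools", "tib0/local-llama").
function qualityUrl(category: string, repo: string): string {
  return `${API_BASE}/${category}/${repo}`;
}

// Fetch the quality data. The response shape is an assumption
// (not specified on this page), so it is returned as `unknown`;
// inspect the payload before relying on specific fields.
async function fetchQuality(category: string, repo: string): Promise<unknown> {
  const res = await fetch(qualityUrl(category, repo));
  if (!res.ok) {
    throw new Error(`API returned ${res.status}`);
  }
  return res.json();
}

// Usage, mirroring the curl command:
// fetchQuality("llm-tools", "tib0/local-llama").then(console.log);
```

Unauthenticated requests are limited to 100/day, so callers should handle a 429 status gracefully.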
Higher-rated alternatives
SillyTavern/SillyTavern
LLM Frontend for Power Users.
libre-webui/libre-webui
Privacy-first web interface for local AI models. Clean, minimal UI for Ollama with extensible...
chyok/ollama-gui
A single-file tkinter-based Ollama GUI project with no external dependencies.
matlab-deep-learning/llms-with-matlab
Connect MATLAB to LLM APIs, including OpenAI® Chat Completions, Azure® OpenAI Services, and Ollama™
ollama4j/ollama4j
A simple Java library for interacting with Ollama server.