Ryan-yang125/ChatLLM-Web
🗣️ Chat with LLMs like Vicuna entirely in your browser with WebGPU, safely, privately, and with no server. Powered by web-llm.
This tool allows you to chat with a powerful AI model, similar to ChatGPT, directly within your web browser. You type in questions or prompts, and the AI generates detailed text responses, all while keeping your conversations private and secure on your own device. It's ideal for anyone who wants to interact with advanced AI for writing, brainstorming, or getting information without sending data to external servers.
637 stars. No commits in the last 6 months.
Use this if you need a personal, private AI assistant that runs entirely offline in your browser, ensuring your conversations remain confidential.
Not ideal if you have an older computer without a dedicated GPU or if your browser doesn't support WebGPU, as performance will be severely impacted.
Stars: 637
Forks: 46
Language: JavaScript
License: MIT
Category:
Last pushed: Aug 08, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Ryan-yang125/ChatLLM-Web"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
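The same endpoint can also be called from a script. A minimal Python sketch, assuming only the URL pattern shown in the curl command above (the response schema is not documented here, so the example decodes and returns the raw JSON rather than accessing specific fields):

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def build_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL from a GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload.

    Each call counts against the 100 requests/day anonymous limit.
    """
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)

# Example usage (performs a network request, so it is left commented out):
# data = fetch_quality("Ryan-yang125", "ChatLLM-Web")
# print(json.dumps(data, indent=2))
```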
Higher-rated alternatives
SillyTavern/SillyTavern
LLM Frontend for Power Users.
libre-webui/libre-webui
Privacy-first web interface for local AI models. Clean, minimal UI for Ollama with extensible...
chyok/ollama-gui
A single-file tkinter-based Ollama GUI project with no external dependencies.
matlab-deep-learning/llms-with-matlab
Connect MATLAB to LLM APIs, including OpenAI® Chat Completions, Azure® OpenAI Services, and Ollama™
ollama4j/ollama4j
A simple Java library for interacting with Ollama server.