nadchif/in-browser-llm-inference
Download and run local LLMs within your browser
Run an AI chat assistant directly in your web browser, keeping your conversations completely private. You type questions or prompts into the chat interface, and the model generates responses locally, without sending any data to external servers. This is ideal for anyone who wants an AI assistant for brainstorming, writing, or looking up information while keeping their interactions confidential and offline.
No commits in the last 6 months.
Use this if you need a confidential AI chat assistant that works entirely within your browser, ensuring your data never leaves your device.
Not ideal if you require an AI assistant with access to real-time internet information or if you need to integrate it with other online services.
Stars: 24
Forks: 4
Language: JavaScript
License: MIT
Category:
Last pushed: Sep 25, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/nadchif/in-browser-llm-inference"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
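If you prefer fetching this data from JavaScript rather than curl, a minimal sketch follows. The endpoint URL is taken from the curl example above; the shape of the JSON response is not documented here, so the code just returns the parsed body as-is.

```javascript
// Base endpoint, as shown in the curl example above.
const API_BASE = 'https://pt-edge.onrender.com/api/v1/quality/llm-tools';

// Build the quality-API URL for a given GitHub owner/repo pair.
function apiUrl(owner, repo) {
  return `${API_BASE}/${owner}/${repo}`;
}

// Fetch the quality data (no key needed for up to 100 requests/day).
// The response schema is an assumption; inspect the parsed JSON yourself.
async function fetchQuality(owner, repo) {
  const res = await fetch(apiUrl(owner, repo));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

// Usage:
// fetchQuality('nadchif', 'in-browser-llm-inference').then(console.log);
```

With a free API key (1,000 requests/day), you would presumably pass it as a header or query parameter; check the API's own documentation for the exact mechanism.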
Higher-rated alternatives
mlc-ai/web-llm
High-performance In-browser LLM Inference Engine
e2b-dev/desktop
E2B Desktop Sandbox for LLMs. E2B Sandbox with desktop graphical environment that you can...
geekjr/quickai
QuickAI is a Python library that makes it extremely easy to experiment with state-of-the-art...
Azure-Samples/llama-index-javascript
This sample shows how to quickly get started with LlamaIndex.ai on Azure 🚀
AkagawaTsurunaki/zerolan-core
ZerolanCore integrates many open-source, locally deployable AI models, and aims to integrate a...