sauravpanda/BrowserAI
Run local LLMs like llama, deepseek-distill, kokoro and more inside your browser
This project helps web developers add private, fast, and offline-capable AI features directly to their websites and applications. It runs pre-trained models (LLMs for text, speech models for audio) entirely inside the user's browser, producing text, speech, or structured output. Web developers, companies that prioritize user privacy, and no-code platform builders can use it to build AI-powered tools without server costs or complex infrastructure.
Use this if you are building a web application and want to add AI capabilities, such as intelligent chat, text generation, or voice commands, with a strong emphasis on user privacy, local processing, and minimal server overhead.
Not ideal if you need to run very large models that exceed typical browser capabilities, or if your application primarily runs outside a web browser.
Stars: 1,381
Forks: 137
Language: TypeScript
License: MIT
Category:
Last pushed: Mar 10, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/sauravpanda/BrowserAI"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
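The same endpoint can also be called from code. A minimal TypeScript sketch, assuming only the URL shape shown in the curl example above (the response schema is not documented here, so it is treated as generic JSON):

```typescript
// Base path taken from the curl example above; this is an assumption about
// how the endpoint generalizes to other owner/repo pairs.
const API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers";

// Build the quality-data URL for a given repository.
function qualityUrl(owner: string, repo: string): string {
  return `${API_BASE}/${owner}/${repo}`;
}

// Fetch the quality data; the JSON shape is not documented on this page,
// so the result is returned as `unknown` for the caller to inspect.
async function fetchQuality(owner: string, repo: string): Promise<unknown> {
  const res = await fetch(qualityUrl(owner, repo));
  if (!res.ok) {
    throw new Error(`API request failed with status ${res.status}`);
  }
  return res.json();
}
```

For example, `fetchQuality("sauravpanda", "BrowserAI")` requests the same URL as the curl command above.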
Related projects
cel-ai/celai
Open source framework designed to accelerate the development of omnichannel AI virtual assistants.
lone-cloud/gerbil
A desktop app for running Large Language Models locally.
vinjn/llm-metahuman
An open solution for AI-powered photorealistic digital humans.
cztomsik/ava
All-in-one desktop app for running LLMs locally.
snwfdhmp/llm
Use any LLM from the command line.