mlc-ai/web-llm
High-performance In-browser LLM Inference Engine
WebLLM lets web developers integrate powerful AI language models directly into their web applications, with inference running entirely in the user's browser. Given model weights and a prompt, it produces AI-generated text, chat responses, or structured JSON output. It is a good fit for developers building interactive, privacy-focused AI experiences, since no user data leaves the browser.
17,562 stars. Used by 3 other packages. Actively maintained with 8 commits in the last 30 days. Available on npm.
Use this if you are a web developer who wants to build AI assistants, chatbots, or other language model-powered features directly into a website, without needing a server to process AI requests.
Not ideal if you are a data scientist or researcher looking for a Python library for offline model training or complex backend AI inference.
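The in-browser workflow described above can be sketched in a few lines. This is a minimal sketch, not a definitive integration: it assumes the published `@mlc-ai/web-llm` npm package, a WebGPU-capable browser, and uses one model ID from WebLLM's prebuilt model list as an example.

```typescript
// Minimal sketch of client-side chat with WebLLM.
// Assumes a WebGPU-capable browser; nothing here runs on a server.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function chat(prompt: string): Promise<string> {
  // First call downloads and compiles the model in the browser;
  // subsequent loads are served from cache. The model ID is an
  // example from WebLLM's prebuilt list and may need adjusting.
  const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC");

  // OpenAI-compatible chat completion call, evaluated locally.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: prompt }],
  });
  return reply.choices[0].message.content ?? "";
}
```

Because the API mirrors the OpenAI chat-completions shape, existing server-backed chat code can often be ported by swapping the client object for the WebLLM engine.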
Stars: 17,562
Forks: 1,221
Language: TypeScript
License: Apache-2.0
Category:
Last pushed: Mar 13, 2026
Commits (30d): 8
Dependencies: 1
Reverse dependents: 3
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/mlc-ai/web-llm"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Related tools
e2b-dev/desktop
E2B Desktop Sandbox for LLMs. E2B Sandbox with desktop graphical environment that you can...
geekjr/quickai
QuickAI is a Python library that makes it extremely easy to experiment with state-of-the-art...
Azure-Samples/llama-index-javascript
This sample shows how to quickly get started with LlamaIndex.ai on Azure 🚀
AkagawaTsurunaki/zerolan-core
ZerolanCore integrates many open-source, locally deployable AI models, and aims to integrate a...
ParisNeo/lollms
An all-in-one AI solution compatible with any known AI service on the planet