web-llm and llm-x

These projects are complements: web-llm provides the in-browser inference engine, and llm-x builds on it as its underlying LLM runtime to offer a user-friendly web interface for running models locally.

|               | web-llm       | llm-x                      |
|---------------|---------------|----------------------------|
| Score         | 73 (Verified) | 49 (Emerging)              |
| Maintenance   | 17/25         | 6/25                       |
| Adoption      | 13/25         | 10/25                      |
| Maturity      | 25/25         | 16/25                      |
| Community     | 18/25         | 17/25                      |
| Stars         | 17,562        | 292                        |
| Forks         | 1,221         | 33                         |
| Downloads     |               |                            |
| Commits (30d) | 8             | 0                          |
| Language      | TypeScript    | TypeScript                 |
| License       | Apache-2.0    | MIT                        |
| Risk flags    | None          | No package, no dependents  |

About web-llm

mlc-ai/web-llm

High-performance In-browser LLM Inference Engine

WebLLM lets web developers integrate powerful AI language models directly into their web applications, running entirely within the user's browser. Given model files and instructions, it produces AI-generated text, chat responses, or structured JSON data. It suits web developers building interactive, privacy-focused AI experiences for their users.

web-development in-browser-ai client-side-llm web-application-development interactive-ai
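A minimal sketch of what calling WebLLM from a web page looks like, using its OpenAI-style chat API. The model id and options below are illustrative; this assumes the `@mlc-ai/web-llm` package is installed and the browser supports WebGPU (the model weights are downloaded and cached on first use).

```typescript
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Load a model into the browser; the callback reports download/compile progress.
// Model id is illustrative — pick one from web-llm's prebuilt model list.
const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC", {
  initProgressCallback: (progress) => console.log(progress.text),
});

// OpenAI-compatible chat completion, running entirely client-side.
const reply = await engine.chat.completions.create({
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Summarize WebGPU in one sentence." },
  ],
});

console.log(reply.choices[0].message.content);
```

Because the API mirrors OpenAI's chat completions, existing client code can often be pointed at WebLLM with little change, while all inference stays on the user's device.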

About llm-x

mrdjohnson/llm-x

LLMX; Easiest 3rd party Local LLM UI for the web!

This tool helps you interact with various AI models on your own computer, keeping your conversations and generated content completely private. You can input text or images to get text responses or generate new images, all without sending your data to external servers. It's designed for anyone who wants to use AI for tasks like drafting text, getting explanations, or creating images while ensuring maximum privacy.

personal-productivity private-data-processing content-creation local-ai-operations secure-chat

Scores updated daily from GitHub, PyPI, and npm data.