r2d4/react-llm
Easy-to-use headless React Hooks to run LLMs in the browser with WebGPU. Just useLLM().
This project provides tools to embed a large language model (LLM) directly into a website, running entirely in the user's web browser. It takes user text input and generates AI responses without any data leaving the browser. It is aimed at web developers who want to add privacy-focused, offline-capable AI chat features to their React applications.
702 stars. No commits in the last 6 months.
Use this if you are a React developer building a web application and need to integrate an AI chatbot that runs locally in the user's browser, prioritizing data privacy and offline capability.
Not ideal if you need to integrate with powerful cloud-based LLMs, or if you are not developing a React application.
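The tagline promises a single headless hook, useLLM(). A minimal sketch of how a chat component might consume such a hook follows; the package path and the hook's return fields (init, send, conversation, isReady) are assumptions about the API shape, not verified against the library, and the code is browser-only since inference runs on WebGPU:

```tsx
// Hypothetical sketch: the import path and the fields destructured from
// useLLM() are assumptions, not a verified API reference.
import { useState } from "react";
import { useLLM } from "@react-llm/headless";

function Chat() {
  const { init, send, conversation, isReady } = useLLM();
  const [draft, setDraft] = useState("");

  return (
    <div>
      {/* Model weights download and run in-browser; no data leaves the page. */}
      {!isReady && <button onClick={() => init()}>Load model</button>}
      <ul>
        {conversation?.messages.map((m, i) => (
          <li key={i}>{m.text}</li>
        ))}
      </ul>
      <input value={draft} onChange={(e) => setDraft(e.target.value)} />
      <button
        disabled={!isReady}
        onClick={() => {
          send(draft);
          setDraft("");
        }}
      >
        Send
      </button>
    </div>
  );
}
```

Because the hook is headless, all rendering decisions stay with the application; the library only manages model loading and the conversation state.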
Stars: 702
Forks: 32
Language: TypeScript
License: MIT
Category: llm-tools
Last pushed: Jun 27, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/r2d4/react-llm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
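The endpoint path appears to follow a fixed category/owner/repo pattern. A small helper that assembles it is sketched below; the path layout is inferred from the single example URL above and is an assumption, not documented behavior:

```typescript
// Builds a quality-API URL like the curl example above. The path layout
// (/api/v1/quality/<category>/<owner>/<repo>) is inferred from one example
// URL and is an assumption, not documented API behavior.
const API_BASE = "https://pt-edge.onrender.com/api/v1/quality";

function qualityUrl(category: string, owner: string, repo: string): string {
  // encodeURIComponent keeps unusual repo or owner names URL-safe.
  return [API_BASE, category, owner, repo]
    .map((part, i) => (i === 0 ? part : encodeURIComponent(part)))
    .join("/");
}
```

For example, qualityUrl("llm-tools", "r2d4", "react-llm") reproduces the URL in the curl command above.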
Higher-rated alternatives
mlc-ai/web-llm: High-performance In-browser LLM Inference Engine
e2b-dev/desktop: E2B Desktop Sandbox for LLMs. E2B Sandbox with desktop graphical environment that you can...
geekjr/quickai: QuickAI is a Python library that makes it extremely easy to experiment with state-of-the-art...
Azure-Samples/llama-index-javascript: This sample shows how to quickly get started with LlamaIndex.ai on Azure 🚀
AkagawaTsurunaki/zerolan-core: ZerolanCore integrates many open-source, locally deployable AI models, and aims to integrate a...