rishi-raj-jain/sse-streaming-llm-response
Using Server-Sent Events (SSE) to stream LLM responses in Next.js
This project helps web developers build chat-like interfaces where large language model (LLM) responses appear word by word instead of only after the full reply is ready. It takes an LLM's response, breaks it into a stream of smaller updates, and delivers them over Server-Sent Events so a Next.js application can render them as they arrive, making AI-powered applications feel more responsive.
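The server side of this pattern can be sketched as follows. This is a minimal illustration, not the repo's actual code: `chunkWords`, `toSSE`, and the `/api/chat` handler shape are assumptions, and the hard-coded reply stands in for a real LLM call. In Next.js, the handler would be exported as `GET` from `app/api/chat/route.ts`.

```typescript
// Illustrative sketch: split a full LLM reply into word-sized chunks
// and frame each one as a Server-Sent Event.

function chunkWords(text: string): string[] {
  // Split on whitespace but keep the separators, so the client can
  // reassemble the text exactly as it arrived.
  return text.split(/(?<=\s)/);
}

function toSSE(chunk: string): string {
  // An SSE frame is "data: <payload>\n\n"; JSON-encode the payload
  // so embedded newlines survive the framing.
  return `data: ${JSON.stringify(chunk)}\n\n`;
}

// Hypothetical route handler (export as GET in a Next.js app):
async function handleChat(): Promise<Response> {
  const reply = "Hello from the model"; // stand-in for a real LLM call
  const encoder = new TextEncoder();
  const stream = new ReadableStream<Uint8Array>({
    start(controller) {
      for (const chunk of chunkWords(reply)) {
        controller.enqueue(encoder.encode(toSSE(chunk)));
      }
      controller.close();
    },
  });
  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
    },
  });
}
```

The `text/event-stream` content type is what tells the browser (and `EventSource`) to treat the response as a live SSE stream rather than a one-shot body.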
No commits in the last 6 months.
Use this if you are a web developer building a Next.js application and want to display LLM responses as they are generated, similar to a real-time chat.
Not ideal if you are not a web developer or if your application doesn't require real-time streaming updates from an LLM.
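On the consuming side, the browser reads those frames back into text. A hedged sketch of the wire format, assuming the server emits `data: <json-string>` frames; `parseSSE` and the `/api/chat` URL are illustrative names, not from the repo:

```typescript
// Parse a buffer of raw SSE text into complete events plus any
// trailing partial event that hasn't finished arriving yet.
function parseSSE(buffer: string): { events: string[]; rest: string } {
  // Events are separated by a blank line; the text after the last
  // blank line may be an incomplete frame, so hand it back as `rest`.
  const parts = buffer.split("\n\n");
  const rest = parts.pop() ?? "";
  const events = parts
    .filter((p) => p.startsWith("data: "))
    .map((p) => JSON.parse(p.slice("data: ".length)) as string);
  return { events, rest };
}

// In a real page, EventSource does this parsing for you:
// const es = new EventSource("/api/chat");
// es.onmessage = (e) => { output.textContent += JSON.parse(e.data); };
```

Using `EventSource` directly is usually preferable; the manual parser above just makes the framing explicit.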
Stars
10
Forks
5
Language
TypeScript
License
—
Category
Last pushed
May 06, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/rishi-raj-jain/sse-streaming-llm-response"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Azure-Samples/serverless-chat-langchainjs
Build your own serverless AI Chat with Retrieval-Augmented-Generation using LangChain.js,...
Dcup-dev/dcup
Dcup - Advanced RAG for Personal Knowledge ☕
Cocolalilal/LastChat
A Fork of Rikkahub with an overhauled UI and feature additions
crawlchat/crawlchat
Turn your documentation into an AI assistant that answers questions instantly
GitHamza0206/simba
OpenSource Production ready Customer service with built in Evals and monitoring