rishi-raj-jain/sse-streaming-llm-response

Using Server-Sent Events (SSE) to stream LLM responses in Next.js

Score: 28 / 100 (Experimental)

This project helps web developers build chat-like interfaces where large language model (LLM) responses appear word by word instead of only after the full reply is ready. It takes an LLM's full response and breaks it into a stream of smaller updates, delivered as Server-Sent Events (SSE), that a Next.js application can display as they arrive, a common way to improve the perceived responsiveness of AI-powered applications.
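The core idea described above, chunking a full reply into SSE events, can be sketched in a few lines of TypeScript. This is a minimal illustration, not the repository's actual code: the word-by-word splitting and the `[DONE]` sentinel are assumptions, and the commented `Response` usage shows how such a stream would typically be returned from a Next.js App Router route handler.

```typescript
// Minimal sketch: turn a complete LLM reply into an SSE-formatted stream.
// Each SSE event is a "data:" line followed by a blank line.
function toSSEStream(fullReply: string): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  const words = fullReply.split(" ");
  let i = 0;
  return new ReadableStream({
    pull(controller) {
      if (i < words.length) {
        controller.enqueue(encoder.encode(`data: ${words[i++]}\n\n`));
      } else {
        // Hypothetical end-of-stream sentinel for the client to detect.
        controller.enqueue(encoder.encode("data: [DONE]\n\n"));
        controller.close();
      }
    },
  });
}

// In a Next.js App Router route handler, the stream would be returned as:
// return new Response(toSSEStream(reply), {
//   headers: { "Content-Type": "text/event-stream" },
// });

// Standalone demo: read the stream back and reassemble the raw events.
async function demo(): Promise<string> {
  const reader = toSSEStream("Hello from the model").getReader();
  const decoder = new TextDecoder();
  let out = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    out += decoder.decode(value);
  }
  console.log(out);
  return out;
}

demo();
```

Because `pull` only runs when the consumer asks for the next chunk, the stream naturally applies backpressure; a real handler would enqueue tokens as the LLM produces them rather than splitting a finished string.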

No commits in the last 6 months.

Use this if you are a web developer building a Next.js application and want to display LLM responses as they are generated, similar to a real-time chat.

Not ideal if you are not a web developer or if your application doesn't require real-time streaming updates from an LLM.
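On the receiving side, a Next.js page would consume such a stream incrementally. The sketch below stands in for that flow under stated assumptions: a tiny Node HTTP server plays the role of the streaming endpoint (the path and payload are hypothetical), and the client reads the response body with `fetch`, splitting events on the blank-line delimiter that the SSE wire format requires.

```typescript
import { createServer } from "node:http";
import type { AddressInfo } from "node:net";

// Stand-in server: emits two SSE events, then ends the stream.
const server = createServer((_req, res) => {
  res.writeHead(200, { "Content-Type": "text/event-stream" });
  for (const word of ["Hello", "world"]) {
    res.write(`data: ${word}\n\n`);
  }
  res.end();
});

// Client side: read the raw byte stream and extract "data:" payloads.
async function readSSE(url: string): Promise<string[]> {
  const res = await fetch(url);
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  const events: string[] = [];
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    // SSE events are separated by a blank line ("\n\n").
    let idx: number;
    while ((idx = buffer.indexOf("\n\n")) !== -1) {
      const raw = buffer.slice(0, idx);
      buffer = buffer.slice(idx + 2);
      if (raw.startsWith("data: ")) events.push(raw.slice(6));
    }
  }
  return events;
}

server.listen(0, async () => {
  const { port } = server.address() as AddressInfo;
  const events = await readSSE(`http://127.0.0.1:${port}/`);
  console.log(events); // each event can be appended to the UI as it arrives
  server.close();
});
```

In a browser, the built-in `EventSource` API handles this parsing automatically for GET endpoints; the manual `fetch` approach shown here is what chat UIs typically use when the request is a POST carrying the prompt.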

web-development AI-applications user-experience real-time-UI Next.js
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 15 / 25


Stars: 10
Forks: 5
Language: TypeScript
License: None
Last pushed: May 06, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/rishi-raj-jain/sse-streaming-llm-response"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.