chu2bard/eventpipe

Server-sent events streaming library for LLM responses

Score: 27 / 100 (Experimental)

This library helps developers who integrate Large Language Models (LLMs) into their applications manage continuous response streams. It consumes raw, real-time server-sent event streams from providers such as OpenAI, Anthropic, or Google and exposes a unified, easier-to-handle flow of individual tokens, which simplifies showing LLM responses to users as they are generated rather than waiting for the entire response.

Use this if you are building an application that needs to display LLM responses to users in real time as they are generated, rather than all at once.

Not ideal if you need a tool to manage the LLM models themselves or to orchestrate complex prompts, or if your application doesn't require real-time streaming of responses.
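To illustrate the kind of normalization described above, here is a minimal sketch of parsing an OpenAI-style server-sent event stream into individual token events. All names here are hypothetical for illustration; this is not eventpipe's actual API.

```typescript
// Illustrative sketch only: parse a raw SSE text chunk into token events,
// skipping comments, blank lines, and the "[DONE]" end-of-stream sentinel
// used by OpenAI-style streaming APIs. Names are hypothetical, not the
// library's real API.

interface TokenEvent {
  token: string;
}

function parseSseChunk(raw: string): TokenEvent[] {
  const events: TokenEvent[] = [];
  for (const line of raw.split("\n")) {
    const trimmed = line.trim();
    // SSE comments start with ":"; blank lines separate events.
    if (!trimmed.startsWith("data:")) continue;
    const payload = trimmed.slice("data:".length).trim();
    if (payload === "[DONE]") continue; // end-of-stream sentinel
    events.push({ token: payload });
  }
  return events;
}

const raw = [
  ": keep-alive comment",
  "data: Hello",
  "",
  "data: world",
  "data: [DONE]",
].join("\n");

console.log(parseSseChunk(raw).map((e) => e.token).join(" ")); // "Hello world"
```

A real implementation would also buffer partial lines across network chunks and join multi-line `data:` fields within one event, which this sketch omits for brevity.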

Tags: LLM-application-development, real-time-data-streaming, API-integration, backend-development
No package published · No dependents
Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 11 / 25
Community: 0 / 25


Stars: 16
Forks:
Language: TypeScript
License: MIT
Last pushed: Feb 11, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/chu2bard/eventpipe"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.