chu2bard/eventpipe
Server-sent events streaming library for LLM responses
eventpipe helps developers who integrate Large Language Models (LLMs) into their applications manage continuous response streams. It consumes raw, real-time streams from providers such as OpenAI, Anthropic, or Google and exposes them as a unified, easier-to-handle flow of individual tokens, so applications can show LLM responses to users as they are generated rather than waiting for the entire response.
Use this if you are building an application that needs to display LLM responses to users in real time as they are generated.
Not ideal if you are looking for a tool to manage the LLM models themselves, orchestrate complex prompts, or if your application doesn't require real-time streaming of responses.
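To make the problem concrete, here is a minimal sketch of the server-sent-events (SSE) token parsing a library like this performs. The function name and shape are illustrative assumptions, not eventpipe's actual API; it follows the standard SSE wire format, where each event carries a `data:` line and OpenAI-style streams end with a `[DONE]` sentinel.

```typescript
// Illustrative only: extract the `data:` payloads from a raw SSE text chunk.
// This is NOT eventpipe's real API, just a sketch of the underlying work.
function parseSSEChunk(chunk: string): string[] {
  const tokens: string[] = [];
  for (const line of chunk.split("\n")) {
    if (!line.startsWith("data:")) continue; // skip blank lines, comments, ids
    const payload = line.slice("data:".length).trim();
    if (payload === "[DONE]") break;         // OpenAI-style end-of-stream sentinel
    tokens.push(payload);
  }
  return tokens;
}

// Example: two events followed by the terminator.
const raw = "data: Hel\n\ndata: lo\n\ndata: [DONE]\n\n";
console.log(parseSSEChunk(raw).join("")); // "Hello"
```

A real streaming library layers buffering on top of this, since a network chunk may end mid-event; eventpipe's value is handling those edge cases across providers.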
Stars: 16
Forks: —
Language: TypeScript
License: MIT
Category: —
Last pushed: Feb 11, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/chu2bard/eventpipe"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
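The endpoint path above follows an `owner/repo` pattern, so the same query can be built for any listed tool. A small hypothetical helper (the function name is ours; only the base URL comes from the curl example):

```typescript
// Base path taken from the documented curl example; helper name is illustrative.
const BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools";

// Build the quality-API URL for a given GitHub owner/repo pair.
function qualityUrl(owner: string, repo: string): string {
  return `${BASE}/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}`;
}

console.log(qualityUrl("chu2bard", "eventpipe"));
// https://pt-edge.onrender.com/api/v1/quality/llm-tools/chu2bard/eventpipe
```

The result can be passed to `fetch()` or curl as shown above; the response schema is not documented here, so it is left unspecified.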
Higher-rated alternatives
mangiucugna/json_repair
A python module to repair invalid JSON from LLMs
antfu/shiki-stream
Streaming highlighting with Shiki. Useful for highlighting text streams like LLM outputs.
iw4p/partialjson
+1M Downloads! Repair invalid LLM JSON, commonly used to parse the output of LLMs — Parsing...
yokingma/fetch-sse
An easy API for making Event Source requests, with all the features of fetch(), Supports...
kaptinlin/jsonrepair
A high-performance Golang library for easily repairing invalid JSON documents. Designed to fix...