st3w4r/openai-partial-stream

Turns a stream of tokens into a parsable JSON object as soon as possible. Enables streaming UIs for AI apps built on LLMs.

Score: 28 / 100 (Experimental)

This project helps developers build AI applications that feel fast and responsive. It takes the fragmented tokens streamed from an AI model (such as OpenAI's) and assembles them into a usable data structure as early as possible. Users of the application see partial but well-formed information much sooner, which makes the experience more engaging. It is aimed at developers building AI-powered user interfaces.
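The core idea described above — repairing an incomplete JSON fragment so it parses before the stream finishes — can be sketched as follows. This is a minimal illustration of the technique, not the library's actual API or algorithm; the function name and token fragments are hypothetical.

```python
import json

def parse_partial_json(fragment: str):
    """Best-effort parse of incomplete JSON: close any unterminated
    string, then close open arrays/objects in reverse order, and try
    json.loads on the repaired text. Returns None if still unparsable."""
    stack = []          # closers we still owe, innermost last
    in_string = False
    escaped = False
    for ch in fragment:
        if in_string:
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch in "{[":
            stack.append("}" if ch == "{" else "]")
        elif ch in "}]":
            stack.pop()
    repaired = fragment + ('"' if in_string else "") + "".join(reversed(stack))
    try:
        return json.loads(repaired)
    except json.JSONDecodeError:
        return None  # e.g. a dangling key with no value yet

# Tokens arrive one fragment at a time; a usable object appears early.
buf = ""
for token in ['{"name": "Par', 'is", "popul', 'ation": 2']:
    buf += token
    print(parse_partial_json(buf))
```

After the first fragment this already yields `{"name": "Par"}`, so a UI can render the partial value immediately instead of waiting for the closing brace.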

122 stars. No commits in the last 6 months.

Use this if you are a developer building an AI application and want to provide a real-time, streaming user experience rather than making users wait for the full AI response.

Not ideal if your application doesn't require real-time updates and can wait for the AI to return a complete response before displaying anything.

Tags: AI-application-development, user-experience, real-time-data, streaming-UI, front-end-development
Badges: Stale (6m), No Package, No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 2 / 25
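The overall score appears to be the sum of the four subscores, each out of 25. A quick check confirms the numbers shown are consistent (the additive scoring rule is an assumption inferred from the listing, not documented here):

```python
# Subscores from the listing, each assumed to be out of 25.
subscores = {"Maintenance": 0, "Adoption": 10, "Maturity": 16, "Community": 2}
total = sum(subscores.values())
print(total)  # → 28, matching the 28/100 overall score
```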


Stars: 122
Forks: 1
Language: HTML
License: MIT
Last pushed: Jun 14, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/st3w4r/openai-partial-stream"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.