LowinLi/transformers-stream-generator

This is a text generation method that returns a generator, streaming out each token in real time during inference. It is built on Hugging Face Transformers.

Score: 58 / 100 (Established)

This is a tool for developers who are building applications that use large language models (LLMs) to generate text. It helps make the text generation process feel much faster and more interactive for the end-user. Developers feed in their existing Hugging Face Transformers model and an initial prompt, and it outputs a stream of text tokens as they are generated, rather than waiting for the entire response.
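The core idea, yielding tokens to the caller as they are produced instead of returning one finished string, can be sketched in plain Python. This is an illustrative sketch, not the library's actual API: the real package hooks into `model.generate` from Hugging Face Transformers, and `fake_model_tokens` below is a hypothetical stand-in for model inference.

```python
from typing import Iterator

def fake_model_tokens(prompt: str) -> Iterator[str]:
    # Hypothetical stand-in for LLM inference: yield each token
    # as soon as it is "generated" rather than after the full
    # response is complete.
    for token in ["Hello", ",", " world", "!"]:
        yield token

def stream_response(prompt: str) -> str:
    """Consume the token stream, displaying each piece immediately."""
    parts = []
    for token in fake_model_tokens(prompt):
        print(token, end="", flush=True)  # user sees output right away
        parts.append(token)
    print()
    return "".join(parts)

stream_response("greet me")
```

The benefit is perceived latency: the first token reaches the user as soon as it exists, while a non-streaming call would block until the final token is generated.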

Used by 10 other packages. No commits in the last 6 months. Available on PyPI.

Use this if you are a developer building an application where users are waiting for AI-generated text and you want to improve their experience by showing output word-by-word.

Not ideal if you are an end-user looking for a ready-to-use application, or if you only need the final, complete text output at once.

Tags: AI application development, LLM integration, User experience design, Real-time text generation, Developer tooling
Stale: 6m
Maintenance: 0 / 25
Adoption: 14 / 25
Maturity: 25 / 25
Community: 19 / 25


Stars: 96
Forks: 19
Language: Python
License: MIT
Last pushed: Mar 11, 2024
Commits (30d): 0
Reverse dependents: 10

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/LowinLi/transformers-stream-generator"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.