jiseokson/PageBrain

Light-weight LLM Serving with PagedAttention

Overall score: 17 / 100 (Experimental)

This is a tool for developers who are building or running applications that use large language models (LLMs). It helps manage the memory used by LLMs to generate text, making it more efficient to handle multiple user requests simultaneously on a single GPU. It takes in a standard HuggingFace LLM and outputs a more memory-efficient version that can serve many requests.
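To make the memory-management idea concrete, here is a toy Python sketch of PagedAttention-style KV-cache bookkeeping. This is an illustration of the general technique (fixed-size cache blocks plus a per-sequence block table, allocated on demand), not PageBrain's actual code; all names here are hypothetical.

```python
# Toy sketch of PagedAttention-style KV-cache management (hypothetical,
# not PageBrain's implementation): the KV cache is split into fixed-size
# blocks, and each sequence keeps a block table mapping logical block
# indices to physical block ids, so memory grows on demand instead of
# being reserved up front per request.

BLOCK_SIZE = 16  # tokens per KV block (illustrative value)

class BlockAllocator:
    """Pool of physical KV-cache blocks shared by all sequences."""

    def __init__(self, num_blocks: int):
        self.free = list(range(num_blocks))  # ids of free physical blocks

    def alloc(self) -> int:
        if not self.free:
            raise MemoryError("KV cache exhausted")
        return self.free.pop()

    def release(self, blocks):
        self.free.extend(blocks)

class Sequence:
    """One generation request; owns a block table into the shared pool."""

    def __init__(self, allocator: BlockAllocator):
        self.allocator = allocator
        self.block_table = []  # logical block index -> physical block id
        self.num_tokens = 0

    def append_token(self):
        # A new physical block is allocated only when the last one fills up.
        if self.num_tokens % BLOCK_SIZE == 0:
            self.block_table.append(self.allocator.alloc())
        self.num_tokens += 1

allocator = BlockAllocator(num_blocks=64)
seq = Sequence(allocator)
for _ in range(40):  # 40 tokens -> ceil(40 / 16) = 3 blocks
    seq.append_token()
print(len(seq.block_table))  # 3
```

Because blocks are allocated per token batch rather than per worst-case sequence length, many concurrent sequences can share one GPU's cache pool with little waste.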

Use this if you are a developer looking for an educational, hackable reference implementation of modern LLM serving techniques for research or integration into your Python application.

Not ideal if you are an end-user simply looking to interact with an existing LLM or a developer who needs a production-ready, highly optimized LLM serving solution out-of-the-box without wanting to dive into its internals.

Tags: LLM-serving, GPU-optimization, model-deployment, AI-infrastructure, deep-learning-research
No License · No Package · No Dependents
Maintenance: 6 / 25
Adoption: 6 / 25
Maturity: 5 / 25
Community: 0 / 25


Stars: 15
Forks: —
Language: Python
License: none
Last pushed: Nov 27, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/jiseokson/PageBrain"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
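The same request can be made from Python. The sketch below only builds the endpoint URL following the pattern in the curl example; `quality_url` and the `ecosystem` parameter are hypothetical helpers, and the meaning of the "transformers" path segment is assumed from the example, not documented here.

```python
# Hypothetical helper that reconstructs the API URL shown in the curl
# example above; path layout is assumed from that single example.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint for a given repository."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

print(quality_url("transformers", "jiseokson", "PageBrain"))
# https://pt-edge.onrender.com/api/v1/quality/transformers/jiseokson/PageBrain
```

To actually fetch the JSON, pass this URL to any HTTP client (for example `urllib.request.urlopen` from the standard library).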