eqimp/hogwild_llm

Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache

Quality score: 38 / 100 (Emerging)

This project implements a method for accelerating text generation with large language models (LLMs). Given a model and prompts, it runs several generation streams in parallel over a shared, concurrently updated attention cache, producing text faster than sequential decoding, particularly when many requests are processed simultaneously. It is aimed at AI/ML engineers and researchers who deploy or experiment with LLMs in high-throughput scenarios.

140 stars. No commits in the last 6 months.

Use this if you need to generate text from large language models more quickly, especially when handling multiple simultaneous requests or batches.

Not ideal if you are looking for a pre-trained LLM or a low-code solution for basic text generation without performance optimization concerns.

Tags: LLM deployment, AI inference optimization, text generation performance, large language model research
Status: Stale (6m) · No Package · No Dependents

Score breakdown:
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 10 / 25

Stars: 140
Forks: 9
Language: Python
License: Apache-2.0
Last pushed: Aug 13, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/eqimp/hogwild_llm"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
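For programmatic use, the response can be consumed from Python instead of curl. The sketch below parses a response body shaped after the figures shown on this page; the JSON field names (`score`, `breakdown`, and the four axis keys) are assumptions for illustration, not a documented schema.

```python
import json
from urllib.request import urlopen  # would be used for a live request

# Endpoint from the curl example above.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/eqimp/hogwild_llm"


def parse_quality(payload: str) -> dict:
    """Extract the overall score and per-axis breakdown from a response body.

    The schema here is hypothetical, inferred from the page layout
    (four axes scored out of 25, summing to the overall score out of 100).
    """
    data = json.loads(payload)
    breakdown = data.get("breakdown", {})
    return {
        "score": data.get("score"),
        "breakdown": breakdown,
        "total": sum(breakdown.values()),  # sanity check against "score"
    }


# Hypothetical sample response mirroring the numbers shown on this page.
sample = json.dumps({
    "score": 38,
    "breakdown": {"maintenance": 2, "adoption": 10, "maturity": 16, "community": 10},
})
result = parse_quality(sample)
```

A live call would replace `sample` with `urlopen(API_URL).read()`; parsing is kept separate so it can be exercised without network access.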