eqimp/hogwild_llm
Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache
This project offers a method for accelerating text generation with large language models (LLMs). Given an LLM and prompts, it produces output substantially faster than conventional sequential decoding, particularly when many requests are processed simultaneously. It is aimed at AI/ML engineers and researchers deploying or experimenting with LLMs in scenarios that demand high throughput.
140 stars. No commits in the last 6 months.
Use this if you need to generate text from large language models more quickly, especially when handling multiple simultaneous requests or batches.
Not ideal if you are looking for a pre-trained LLM or a low-code solution for basic text generation without performance optimization concerns.
| Stars | Forks | Language | License | Category | Last pushed | Commits (30d) |
|---|---|---|---|---|---|---|
| 140 | 9 | Python | Apache-2.0 | | Aug 13, 2025 | 0 |
Get this data via API:

```shell
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/eqimp/hogwild_llm"
```

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
- **quic/efficient-transformers**: This library empowers users to seamlessly port pretrained models and checkpoints on the...
- **ManuelSLemos/RabbitLLM**: Run 70B+ LLMs on a single 4GB GPU — no quantization required.
- **alpa-projects/alpa**: Training and serving large-scale neural networks with auto parallelization.
- **arm-education/Advanced-AI-Hardware-Software-Co-Design**: Hands-on course materials for ML engineers to master extreme model quantization and on-device...
- **IST-DASLab/marlin**: FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batchsizes...