FMInference/FlexLLMGen
Running large language models on a single GPU for throughput-oriented scenarios.
Score: 44 / 100 (Emerging)
9,380 stars. No commits in the last 6 months.
Flags: Archived, Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 18 / 25
Stars: 9,380
Forks: 592
Language: Python
License: Apache-2.0
Category:
Last pushed: Oct 28, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/FMInference/FlexLLMGen"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
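For programmatic access, the same endpoint can be fetched from Python. The sketch below is a minimal example, assuming the endpoint returns JSON and that an optional API key is passed in an X-API-Key header; the header name and the shape of the response are assumptions, so check the API documentation before relying on them.

# Minimal sketch: fetch the quality report for a repository from the API.
# Assumptions: the endpoint returns JSON, and the optional API key is sent
# in an "X-API-Key" header (the header name is a guess, not documented here).
import requests

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def fetch_quality(ecosystem: str, repo: str, api_key: str | None = None) -> dict:
    # e.g. fetch_quality("transformers", "FMInference/FlexLLMGen")
    headers = {"X-API-Key": api_key} if api_key else {}
    resp = requests.get(f"{BASE_URL}/{ecosystem}/{repo}", headers=headers, timeout=10)
    resp.raise_for_status()  # fail loudly on 4xx/5xx (e.g. rate limit exceeded)
    return resp.json()

if __name__ == "__main__":
    report = fetch_quality("transformers", "FMInference/FlexLLMGen")
    print(report)  # field names depend on the API's response schema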
Higher-rated alternatives
ModelTC/LightCompress (score: 64)
[EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs,...
p-e-w/heretic (score: 62)
Fully automatic censorship removal for language models
Orion-zhen/abliteration (score: 54)
Make abliterated models with transformers, easy and fast
YerbaPage/LongCodeZip (score: 54)
LongCodeZip: Compress Long Context for Code Language Models [ASE2025]
locuslab/wanda (score: 47)
A simple and effective LLM pruning approach.