Model Compression and Optimization: Transformer Models
Three model compression and optimization projects are tracked. The highest-rated is QKV-Core/QKV-Core, scoring 29/100 with 18 stars.
Get all 3 projects as JSON:
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=transformers&subcategory=model-compression-optimization&limit=20"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
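The endpoint above can also be queried programmatically. The sketch below builds the same query URL and picks the top-scoring project from a response; the response schema (a `data` list of records with `name`, `score`, and `tier` fields) is an assumption, not documented by the API.

```python
import json
from urllib.parse import urlencode

BASE = "https://pt-edge.onrender.com/api/v1/datasets/quality"

def build_url(domain: str, subcategory: str, limit: int = 20) -> str:
    # Assemble the quality-dataset query URL shown in the curl example.
    params = {"domain": domain, "subcategory": subcategory, "limit": limit}
    return f"{BASE}?{urlencode(params)}"

# Hypothetical response shape: a "data" list of project records.
sample = json.loads(
    '{"data": [{"name": "QKV-Core/QKV-Core", "score": 29, "tier": "Experimental"}]}'
)
top = max(sample["data"], key=lambda p: p["score"])
print(build_url("transformers", "model-compression-optimization"))
print(top["name"], top["score"])
```

In a live script you would replace the `sample` literal with an HTTP GET against `build_url(...)` (e.g. via `urllib.request.urlopen`), keeping in mind the 100 requests/day limit for keyless access.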
| # | Model | Score | Tier |
|---|---|---|---|
| 1 | QKV-Core/QKV-Core: "Adaptive Hybrid Quantization Framework for deploying 7B+ LLMs on low-VRAM... | 29 | Experimental |
| 2 | Mainframework/Quanta: Convert and quantize LLM models | | Experimental |
| 3 | ES7/Quantization-in-ML: Quantizing LLMs to use their power efficiently | | Experimental |