LMLK-seal/HuggingGGUF
Hugging Face model downloader and GGUF converter.
This tool simplifies managing large language models (LLMs) by providing a user-friendly interface for downloading models from Hugging Face and converting them to the GGUF format. It takes a Hugging Face model ID as input and outputs a GGUF file, a format widely used for running models efficiently on local machines. Researchers, developers, and AI enthusiasts who work with LLMs will find it useful.
No commits in the last 6 months.
Use this if you want a simple way to download large language models from Hugging Face and convert them to GGUF for efficient local deployment or use with specific applications.
Not ideal if you are already comfortable with command-line tools for model downloading and conversion or if you need to work with model formats other than GGUF.
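For context, the command-line workflow this tool wraps can be sketched as follows. This is a minimal illustration, not the repository's actual code: it assumes `huggingface_hub` is installed and a `llama.cpp` checkout (which provides `convert_hf_to_gguf.py`) sits in the working directory; the model ID and quantization type are illustrative.

```python
# Hedged sketch of the download-then-convert workflow the tool automates.
# Assumptions: huggingface_hub is installed; llama.cpp is cloned locally;
# the example model ID and "q8_0" output type are illustrative choices.
import subprocess


def gguf_filename(model_id: str, outtype: str = "q8_0") -> str:
    """Derive an output filename from an 'owner/name' model ID."""
    return f"{model_id.split('/')[-1]}-{outtype}.gguf"


def download_and_convert(model_id: str, outtype: str = "q8_0") -> str:
    """Download a model from Hugging Face, then convert it to GGUF."""
    from huggingface_hub import snapshot_download  # third-party dependency

    local_dir = snapshot_download(repo_id=model_id)  # fetches the model files
    outfile = gguf_filename(model_id, outtype)
    subprocess.run(
        ["python", "llama.cpp/convert_hf_to_gguf.py", local_dir,
         "--outfile", outfile, "--outtype", outtype],
        check=True,
    )
    return outfile


if __name__ == "__main__":
    download_and_convert("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
```

The tool's value is replacing this two-step manual process with a single interface.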
Stars: 13
Forks: 3
Language: Python
License: MIT
Category:
Last pushed: Jun 24, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/LMLK-seal/HuggingGGUF"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
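The curl command above can also be issued from Python's standard library. This is a sketch: the endpoint URL is taken from this page, but the shape of the JSON response is an assumption.

```python
# Hedged sketch: query the quality API for a repository.
# The endpoint URL comes from this page; the response's JSON fields are not
# documented here, so no particular schema is assumed below.
import json
import urllib.request

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(repo_slug: str) -> str:
    """Build the API URL for an 'owner/name' repository slug."""
    return f"{BASE_URL}/{repo_slug}"


def fetch_quality(repo_slug: str) -> dict:
    """GET the quality record for a repository (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(repo_slug)) as resp:
        return json.load(resp)


# fetch_quality("LMLK-seal/HuggingGGUF") would return this repository's record.
```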
Higher-rated alternatives
ModelCloud/GPTQModel
LLM model quantization (compression) toolkit with hw acceleration support for Nvidia CUDA, AMD...
intel/auto-round
🎯An accuracy-first, highly efficient quantization toolkit for LLMs, designed to minimize quality...
pytorch/ao
PyTorch native quantization and sparsity for training and inference
bodaay/HuggingFaceModelDownloader
Simple go utility to download HuggingFace Models and Datasets
NVIDIA/kvpress
LLM KV cache compression made easy