AaronFeng753/Ollama-Model-Dumper
Export and Backup Ollama models into GGUF and ModelFile
This tool helps developers and AI/ML practitioners manage their local Ollama large language models. It takes your installed Ollama models as input and produces backup files in GGUF and ModelFile formats, which can then be imported back into Ollama or transferred. It’s for anyone working with local LLMs who needs to save, restore, or move their models reliably.
No commits in the last 6 months.
Use this if you need to create backups of your Ollama models, migrate them to another machine, or ensure you can restore them after system changes.
Not a fit if you don't use Ollama, or if you need to manage cloud-hosted LLMs.
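To give a feel for what the ModelFile side of a backup involves: an Ollama Modelfile's FROM line points at the model's weights, and for a locally imported model that is typically a blob path under the Ollama models directory. Below is a minimal, hypothetical Python sketch (not this tool's actual code) of extracting that path from Modelfile text, which is one ingredient of locating the GGUF blob to copy:

```python
import re

def gguf_blob_path(modelfile_text: str):
    """Return the local file path from a Modelfile's FROM line, or None.

    A FROM value containing a path separator is treated as a local blob
    (e.g. /root/.ollama/models/blobs/sha256-...); a bare name like
    "llama3" is a registry model reference, not a file on disk.
    This helper is an illustrative assumption, not Ollama-Model-Dumper's API.
    """
    for line in modelfile_text.splitlines():
        m = re.match(r"^FROM\s+(\S+)", line.strip())
        if m and "/" in m.group(1):
            return m.group(1)
    return None
```

For example, feeding it the output of `ollama show <model> --modelfile` for a locally stored model would yield the blob path to back up alongside the Modelfile itself.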
Stars: 92
Forks: 22
Language: Python
License: —
Category: —
Last pushed: Sep 12, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AaronFeng753/Ollama-Model-Dumper"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
ModelCloud/GPTQModel
LLM model quantization (compression) toolkit with hw acceleration support for Nvidia CUDA, AMD...
intel/auto-round
🎯An accuracy-first, highly efficient quantization toolkit for LLMs, designed to minimize quality...
pytorch/ao
PyTorch native quantization and sparsity for training and inference
bodaay/HuggingFaceModelDownloader
Simple go utility to download HuggingFace Models and Datasets
NVIDIA/kvpress
LLM KV cache compression made easy