monk1337/auto-ollama
Run Ollama and GGUF models easily with a single command
Auto-Ollama helps you run large language models (LLMs) such as Mistral or Gemma directly on your own machine instead of relying on cloud services. Give it a model name and it sets up a local copy for you. This is ideal for developers, researchers, or anyone experimenting with LLMs who wants to keep data private or cut API costs.
No commits in the last 6 months.
Use this if you want to deploy and run large language models locally for inference, or convert Hugging Face models into a local, efficient format.
Not ideal if you primarily need to quantize models; a dedicated tool, Auto-QuantLLM, is under development for that purpose.
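To illustrate what a wrapper like this automates, here is a minimal sketch of the manual GGUF-to-Ollama workflow using standard Ollama commands. The download URL, file name, and model name are illustrative, not taken from this repo:

```shell
# Download a GGUF file from Hugging Face (URL and file name are illustrative)
curl -L -o mistral.gguf \
  "https://huggingface.co/TheBloke/Mistral-7B-v0.1-GGUF/resolve/main/mistral-7b-v0.1.Q4_K_M.gguf"

# Point Ollama at the local file via a Modelfile
cat > Modelfile <<'EOF'
FROM ./mistral.gguf
EOF

# Register the model with Ollama and run it locally
ollama create mistral-local -f Modelfile
ollama run mistral-local "Hello, world"
```

Auto-Ollama collapses these steps into a single command, which is the main convenience over driving Ollama by hand.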
Stars
52
Forks
5
Language
Shell
License
Apache-2.0
Category
Last pushed
May 15, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/monk1337/auto-ollama"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
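The response schema is not documented here; as a minimal sketch, you could pipe the JSON response through jq to extract a single field. The `stars` field name below is an assumption, not a documented part of the API:

```shell
# Fetch quality data and pull one field out of the JSON response.
# The ".stars" field name is an assumption; adjust to the actual schema.
curl -s "https://pt-edge.onrender.com/api/v1/quality/transformers/monk1337/auto-ollama" \
  | jq '.stars'
```

Running `curl -s … | jq .` with no filter pretty-prints the whole response, which is the easiest way to discover the actual field names.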
Higher-rated alternatives
ModelCloud/GPTQModel
LLM model quantization (compression) toolkit with hw acceleration support for Nvidia CUDA, AMD...
intel/auto-round
🎯An accuracy-first, highly efficient quantization toolkit for LLMs, designed to minimize quality...
pytorch/ao
PyTorch native quantization and sparsity for training and inference
bodaay/HuggingFaceModelDownloader
Simple go utility to download HuggingFace Models and Datasets
NVIDIA/kvpress
LLM KV cache compression made easy