yankeexe/ollama-manager
🦙 Manage Ollama models from your CLI!
This tool helps AI practitioners and developers manage large language models (LLMs) and vision models for local inference. It searches model registries such as the Ollama library and Hugging Face, and lets you download, delete, and run models locally. It's aimed at anyone experimenting with or deploying LLMs on their own machine.
No commits in the last 6 months. Available on PyPI.
Use this if you need a straightforward way to discover, download, and manage LLMs and vision models on your local machine.
Not ideal if you are looking for a cloud-based model deployment solution or a tool for fine-tuning models.
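Since the tool is published on PyPI, installation is a pip command away. This is a minimal sketch; the package and CLI entry-point names below are assumed to match the repository name, so check the project's README if they differ.

# Assumption: PyPI package and CLI entry point are both named "ollama-manager"
pip install ollama-manager
ollama-manager --help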
Stars
16
Forks
3
Language
Python
License
—
Category
Generative AI
Last pushed
Aug 25, 2025
Commits (30d)
0
Dependencies
7
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/yankeexe/ollama-manager"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
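The endpoint returns the quality data shown above as JSON. A quick way to inspect it from a shell, assuming nothing about the response beyond it being valid JSON:

# Pretty-print the response; -s silences curl's progress output
curl -s "https://pt-edge.onrender.com/api/v1/quality/generative-ai/yankeexe/ollama-manager" | python -m json.tool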
Higher-rated alternatives
openvinotoolkit/model_server
A scalable inference server for models optimized with OpenVINO™
madroidmaq/mlx-omni-server
MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically...
NVIDIA-NeMo/Guardrails
NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based...
rhesis-ai/rhesis
Open-source platform & SDK for testing LLM and agentic apps. Define expected behavior, generate...
generative-computing/mellea
Mellea is a library for writing generative programs.