yankeexe/ollama-manager

🦙 Manage Ollama models from your CLI!

Score: 38 / 100 (Emerging)

This tool helps AI practitioners and developers manage large language models (LLMs) and vision models for local inference. It takes your model preferences, searches model registries such as the Ollama library and Hugging Face, and lets you download, delete, and run models locally. It's designed for anyone experimenting with or deploying LLMs on their own machine.

No commits in the last 6 months. Available on PyPI.

Use this if you need a straightforward way to discover, download, and manage various large language models (LLMs) and vision models on your local machine.

Not ideal if you are looking for a cloud-based model deployment solution or a tool for fine-tuning models.

Tags: large-language-models, local-inference, model-management, AI-development, machine-learning-operations
No license · Stale (6 months)
Maintenance: 2 / 25
Adoption: 6 / 25
Maturity: 17 / 25
Community: 13 / 25


Stars: 16
Forks: 3
Language: Python
License: None
Last pushed: Aug 25, 2025
Commits (30d): 0
Dependencies: 7

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/yankeexe/ollama-manager"

Open to everyone: 100 requests/day with no key needed. A free API key raises the limit to 1,000 requests/day.
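The same endpoint can be called from Python. A minimal sketch, assuming only the URL pattern shown in the curl command above; the response field names in the commented-out section (e.g. `score`) are assumptions, so inspect the actual JSON payload before relying on them:

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a given repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

url = quality_url("generative-ai", "yankeexe", "ollama-manager")

# Fetch and decode the JSON response (uncomment to make a live request;
# the "score" key is a hypothetical field name, not confirmed by the docs):
# with urlopen(url) as resp:
#     data = json.load(resp)
#     print(data.get("score"))
```

Keeping the URL construction in a small helper makes it easy to query other repositories in the same category without repeating the base path.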