loong64/ollama

Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 2, and other large language models.

Score: 39 / 100 (Emerging)

Ollama helps you run powerful large language models (LLMs) like Llama, Gemma, and Phi directly on your own computer. You give it a model from its library and a text prompt, and it returns responses, summaries, or even code. This makes it a good fit for anyone who wants to experiment with or build applications on AI, from individual researchers to small teams that need local AI capabilities.

Use this if you want to run various large language models on your local machine for privacy, cost savings, or offline access, without needing cloud services.

Not ideal if you need a pre-built, production-ready AI application with a graphical user interface and advanced management features out-of-the-box.
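The local workflow described above can be sketched against Ollama's documented HTTP API, which a running `ollama serve` exposes at `http://localhost:11434` by default. This is a minimal sketch, not part of this listing: the model name `llama3.2` is only an example and must be pulled first with `ollama pull`.

```python
import json
import urllib.request

# Default endpoint of a locally running Ollama server.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the generated text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # The /api/generate response carries the model output in the
        # "response" field when streaming is disabled.
        return json.loads(resp.read())["response"]
```

To try it, start the server with `ollama serve`, pull a model (for example `ollama pull llama3.2`), then call `generate("llama3.2", "Summarize what a Dockerfile is.")`.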

AI-experimentation local-AI-development language-model-deployment prompt-engineering content-generation
No Package · No Dependents
Maintenance 10 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 8 / 25

Stars: 9
Forks: 1
Language: Dockerfile
License: MIT
Last pushed: Mar 06, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/loong64/ollama"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
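The same endpoint shown in the curl example can be queried from code. A minimal sketch, assuming only that the endpoint returns a JSON body (the response schema is not documented here):

```python
import json
import urllib.request

# Base of the quality API shown in the curl example above.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given ecosystem/owner/repo triple."""
    return f"{BASE_URL}/{ecosystem}/{owner}/{repo}"


def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch the quality record, assuming the endpoint returns JSON."""
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.loads(resp.read())
```

For this listing the call would be `fetch_quality("transformers", "loong64", "ollama")`, matching the curl URL above.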