loong64/ollama
Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 2, and other large language models.
Ollama lets you run powerful large language models (LLMs) such as Llama, Gemma, and Phi directly on your own computer. You pull a model from its library, give it a prompt, and it returns responses, summaries, or even code. This is useful for anyone who wants to experiment with or build applications on top of AI, from individual researchers to small teams that need local AI capabilities.
Use this if you want to run various large language models on your local machine for privacy, cost savings, or offline access, without needing cloud services.
Not ideal if you need a pre-built, production-ready AI application with a graphical user interface and advanced management features out-of-the-box.
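Once the Ollama server is running locally (`ollama serve`) and a model has been pulled, you can talk to it over its HTTP API on the default port 11434. A minimal sketch, assuming a local server and an already-pulled model named `llama3.2` (the model name is illustrative; substitute whatever you have pulled):

```python
import json
import urllib.request

# Default endpoint of a locally running Ollama server.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str, stream: bool = False) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": stream}).encode()

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the generated text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_generate_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires `ollama serve` running and the model pulled):
# print(ask("llama3.2", "Summarize what Ollama does in one sentence."))
```

Setting `"stream": false` asks the server for a single JSON object instead of a stream of partial chunks, which keeps the client simple.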
Stars: 9
Forks: 1
Language: Dockerfile
License: MIT
Category:
Last pushed: Mar 06, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/loong64/ollama"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
ludwig-ai/ludwig
Low-code framework for building custom LLMs, neural networks, and other AI models
withcatai/node-llama-cpp
Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema...
mudler/LocalAI
🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and...
zhudotexe/kani
kani (カニ) is a highly hackable microframework for tool-calling language models. (NLP-OSS @ EMNLP 2023)
SciSharp/LLamaSharp
A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.