iverly/llamafile-docker
Distribute and run llamafile/LLMs with a single Docker image.
This project helps developers and IT professionals quickly set up and run large language models (LLMs) locally using Docker. You provide a pre-trained LLM in GGUF format, and this project packages it into a ready-to-use Docker image, which can then be run as a server with a web UI or as a command-line tool. This is ideal for those who need to integrate LLM capabilities into their applications or workflows without managing complex environments.
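For example, serving a model usually comes down to a single `docker run` invocation. The command below is a minimal sketch: the image tag, the `/model` mount point, and the model filename are assumptions, so check the repository's README for the exact invocation.

```shell
# Sketch: run the prebuilt image as a local LLM server.
# Assumptions (verify against the repo's README):
#   - the image expects a GGUF model mounted at /model
#   - the llamafile server listens on its default port, 8080
docker run -it --rm \
  -p 8080:8080 \
  -v "$(pwd)/mistral-7b-instruct.Q4_K_M.gguf:/model" \
  iverly/llamafile-docker:latest
```

Once the container is up, the llamafile web UI should be reachable at http://localhost:8080.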
No commits in the last 6 months.
Use this if you need a straightforward way to deploy and experiment with various open-source large language models on your own infrastructure.
Not ideal if you prefer to use cloud-managed LLM services or do not have experience with Docker.
Stars: 74
Forks: 9
Language: Dockerfile
License: Apache-2.0
Category:
Last pushed: May 14, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/iverly/llamafile-docker"
Open to everyone: 100 requests/day with no key required. Get a free key to raise the limit to 1,000/day.
Higher-rated alternatives
ludwig-ai/ludwig
Low-code framework for building custom LLMs, neural networks, and other AI models
withcatai/node-llama-cpp
Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema...
mudler/LocalAI
🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and...
zhudotexe/kani
kani (カニ) is a highly hackable microframework for tool-calling language models. (NLP-OSS @ EMNLP 2023)
SciSharp/LLamaSharp
A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.