monk1337/auto-ollama

run ollama & gguf easily with a single command

Score: 34 / 100 (Emerging)

Auto-Ollama helps you run large language models (LLMs) like Mistral or Gemma directly on your own computer, rather than relying on cloud services. You provide the name of a model, and it helps you get a local version running. This is ideal for developers, researchers, or anyone experimenting with LLMs who wants to keep their data private or reduce API costs.

No commits in the last 6 months.

Use this if you want to deploy and run large language models locally for inference, or convert Hugging Face models into an efficient local format (GGUF).

Not ideal if you primarily need to quantize models; a dedicated tool, Auto-QuantLLM, is under development for that purpose.

Tags: LLM deployment, local AI, AI experimentation, model conversion
Status: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 10 / 25
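
The four category scores account for the overall score. Assuming the total is a plain sum of the categories (an assumption about how this site aggregates; the arithmetic does match the 34 / 100 shown above), a quick sanity check in shell:

```shell
# Category scores from the breakdown above (each out of 25)
maintenance=0
adoption=8
maturity=16
community=10

# Assumed aggregation: overall score is the sum of the four categories (out of 100)
total=$((maintenance + adoption + maturity + community))
echo "Overall: $total / 100"   # prints "Overall: 34 / 100"
```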


Stars: 52
Forks: 5
Language: Shell
License: Apache-2.0
Last pushed: May 15, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/monk1337/auto-ollama"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
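
To pull a single value out of the JSON response, you can filter it with `sed`. This is a minimal sketch: the `score` field name is an assumption about the response schema, so check the actual payload before relying on it.

```shell
# Sample payload; in practice, fetch the real one with:
#   response=$(curl -s "https://pt-edge.onrender.com/api/v1/quality/transformers/monk1337/auto-ollama")
# NOTE: the "score" and "tier" field names are assumptions about the schema.
response='{"score": 34, "tier": "Emerging"}'

# Extract the numeric value of the (assumed) "score" field
score=$(printf '%s' "$response" | sed -n 's/.*"score"[[:space:]]*:[[:space:]]*\([0-9]*\).*/\1/p')
echo "Score: $score"   # prints "Score: 34"
```

For anything beyond a one-off check, a real JSON parser such as `jq` is a safer choice than `sed`.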