g1ibby/llm-deploy

Tool to manage Ollama models on vast.ai

Score: 27 / 100 (Experimental)

This tool helps developers quickly set up and manage large language models (LLMs), such as Llama, on cloud servers rented through vast.ai. You provide configuration details for the models you want, and the tool automates their deployment and lifecycle. It's designed for developers who want to experiment with or host LLMs without manual server configuration.

No commits in the last 6 months.

Use this if you are a developer looking for an automated way to deploy and manage Ollama-compatible LLMs on vast.ai for experimentation or hosting.

Not ideal if you prefer a graphical user interface for managing cloud instances or if you're not comfortable with command-line tools and YAML configurations.

Tags: LLM deployment · Cloud infrastructure · Model hosting · AI development · MLOps
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 5 / 25


Stars: 19
Forks: 1
Language: Python
License: MIT
Last pushed: Apr 19, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/g1ibby/llm-deploy"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
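The same endpoint can be called from Python; below is a minimal sketch using only the standard library. The URL layout matches the curl example above, but the JSON response schema is not documented here, so treat the returned fields as an assumption.

```python
import json
import urllib.request

# Base URL taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, repo: str) -> str:
    """Build the quality-report endpoint URL for a registry/repo slug."""
    return f"{API_BASE}/{registry}/{repo}"

def fetch_quality(registry: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (makes a network call).

    The structure of the returned dict is an assumption; inspect it
    before relying on specific keys.
    """
    with urllib.request.urlopen(quality_url(registry, repo)) as resp:
        return json.load(resp)

# Building the URL requires no network access:
url = quality_url("transformers", "g1ibby/llm-deploy")
```

Within the free tier (100 requests/day), `fetch_quality("transformers", "g1ibby/llm-deploy")` would retrieve the data shown on this page.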