Flowm/llm-stack

Docker Compose configuration for local and hosted LLMs with multiple chat interfaces

Score: 42 / 100 (Emerging)

This project provides a ready-to-use setup for individuals or small teams to experiment with and deploy large language models (LLMs). It allows you to use various AI models, both running locally on your computer and from cloud providers like OpenAI or Google, through multiple chat interfaces similar to ChatGPT. This is ideal for researchers, developers, or even hobbyists who want to quickly set up and interact with LLMs.

Use this if you want a complete, self-contained environment to manage and interact with multiple LLMs, both local and cloud-based, without complex individual setups.

Not ideal if you require a highly customized, enterprise-scale LLM deployment with bespoke security and integration requirements.

Tags: LLM experimentation, AI model deployment, conversational AI prototyping, developer tools, AI research environment

No package · No dependents
Maintenance 6 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 15 / 25

How are scores calculated?

Stars: 11
Forks: 4
Language: Python
License: MIT
Last pushed: Oct 27, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Flowm/llm-stack"

Open to everyone: 100 requests/day with no API key needed. Get a free key for 1,000 requests/day.
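The endpoint above returns JSON. Here is a minimal sketch of consuming it in Python; the field names (`repo`, `score`, `tier`, `breakdown`) are assumptions for illustration, since the actual response schema is not documented on this page:

```python
import json

# Hypothetical response payload -- the real API's field names are not
# documented here, so this shape is an assumption for illustration only.
sample = """
{
  "repo": "Flowm/llm-stack",
  "score": 42,
  "tier": "Emerging",
  "breakdown": {
    "maintenance": 6,
    "adoption": 5,
    "maturity": 16,
    "community": 15
  }
}
"""

data = json.loads(sample)

# The four category scores (each out of 25) should sum to the overall
# score out of 100 shown on the page.
total = sum(data["breakdown"].values())
print(f'{data["repo"]}: {data["score"]} / 100 ({data["tier"]})')
print(f"breakdown sum: {total}")
```

For a live request, replace the sample string with the body of the `curl` call shown above (e.g. via `urllib.request` or `requests`), keeping the same parsing logic.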