FlowLLM-AI/flowllm

FlowLLM: Simplifying LLM-based HTTP/MCP Service Development

Quality score: 49 / 100 (Emerging)

FlowLLM helps developers build AI-powered services by packaging Large Language Models (LLMs), embeddings, and vector databases into accessible HTTP or MCP services. It takes your custom AI logic and configuration, then generates ready-to-use API endpoints or command-line tools. This is for developers or teams looking to quickly deploy AI assistants, RAG applications, or complex AI workflows.

Used by 2 other packages. Available on PyPI.

Use this if you are a developer who needs to rapidly prototype and deploy LLM-based applications as services, without manually handling the API setup for each component.

Not ideal if you are looking for a no-code solution or a simple library for direct LLM interaction within an existing application.

Tags: AI service development, API generation, LLM deployment, RAG application development, workflow automation
Maintenance: 10 / 25
Adoption: 9 / 25
Maturity: 24 / 25
Community: 6 / 25


Stars: 32
Forks: 2
Language: Python
License: Apache-2.0
Last pushed: Feb 18, 2026
Commits (30d): 0
Dependencies: 19
Reverse dependents: 2

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/mcp/FlowLLM-AI/flowllm"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
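If you prefer to consume the endpoint from code rather than curl, a minimal sketch using only the Python standard library is below. The URL pattern is taken from the curl command above; the response is assumed to be JSON (the shape of the payload is not documented here, so the fields you read from it are up to the API).

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/mcp"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a given GitHub repo."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch and decode the quality data; assumes a JSON response body."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Unauthenticated access is rate-limited to 100 requests/day.
    print(fetch_quality("FlowLLM-AI", "flowllm"))
```

With a free API key you get 1,000 requests/day; how the key is passed (header vs. query parameter) is not specified above, so check the API docs before adding it to the request.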