mirpo/fastapi-gen
Build LLM-enabled FastAPI applications without build configuration.
This project helps Python backend developers quickly set up production-ready API applications without dealing with extensive build configuration. You provide a project name and optionally choose a template, and it generates a fully functional FastAPI application with boilerplate code, tests, and documentation. It's designed for developers building web services, including those powered by large language models.
Available on PyPI.
Use this if you are a Python developer who wants to quickly bootstrap a new FastAPI project, especially one involving AI text processing or local LLM inference, and minimize setup time.
Not ideal if you need a front-end application, a non-Python backend, or a highly customized project structure from scratch that doesn't align with common enterprise patterns.
Stars: 11
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Mar 13, 2026
Commits (30d): 0
Dependencies: 3
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/mirpo/fastapi-gen"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
Higher-rated alternatives
ludwig-ai/ludwig
Low-code framework for building custom LLMs, neural networks, and other AI models
withcatai/node-llama-cpp
Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema...
mudler/LocalAI
:robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and...
zhudotexe/kani
kani (カニ) is a highly hackable microframework for tool-calling language models. (NLP-OSS @ EMNLP 2023)
SciSharp/LLamaSharp
A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.