biraj21/llm-server-from-scratch

FastAPI server for locally serving Gemma 3 270M & OpenAI Whisper with batched inference and streaming support.

Score: 13 / 100 (Experimental)

This project helps developers experiment with deploying and serving large language models (LLMs) and speech-to-text models locally. It takes text prompts or audio inputs and returns generated text or transcribed text as output. It is designed for software developers and machine learning engineers who want to understand model-serving fundamentals, not for deploying production systems.

No commits in the last 6 months.

Use this if you are a developer learning about serving LLMs or speech models and want to experiment with features like batched inference and streaming locally.

Not ideal if you need a robust, production-ready solution for deploying AI models or if you are not comfortable with command-line tools and Python development.

LLM deployment · speech-to-text services · machine learning infrastructure · AI model serving · developer experimentation
No License · Stale 6m · No Package · No Dependents
Maintenance 2 / 25
Adoption 4 / 25
Maturity 7 / 25
Community 0 / 25

How are scores calculated?

Stars: 8
Forks:
Language: HTML
License:
Last pushed: Sep 25, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/biraj21/llm-server-from-scratch"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
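The same endpoint can also be called from code. Below is a minimal Python sketch using only the standard library; `quality_url` and `fetch_quality` are hypothetical helper names, and the JSON response schema is not documented here, so the fetch simply decodes whatever the API returns:

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{BASE}/{quote(ecosystem)}/{quote(owner)}/{quote(repo)}"


def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report.

    The response schema is an assumption; this just returns the
    parsed JSON without interpreting any fields.
    """
    with urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)


print(quality_url("transformers", "biraj21", "llm-server-from-scratch"))
```

Note the daily request limits quoted above when calling this in a loop; an API key raises the cap from 100 to 1,000 requests per day.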