basetenlabs/truss
The simplest way to serve AI/ML models in production
This tool lets machine learning engineers and data scientists deploy AI models to production without building serving infrastructure themselves. You provide your trained model and its serving logic as a Python class, and it produces a production-ready API endpoint, simplifying the path from development to a usable service.
1,125 stars. Used by 1 other package. Actively maintained with 61 commits in the last 30 days. Available on PyPI.
Use this if you need to quickly and reliably turn a machine learning model into an API that can handle real-world requests, without getting bogged down in infrastructure details like Docker or Kubernetes.
Not ideal if you primarily deploy non-ML applications, or if you need full manual control over containerization and server configuration outside of a Baseten deployment.
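The workflow described above, a Python class that holds the trained model and its serving logic, can be sketched as a minimal Truss-style model file. The doubling "model" and the `value` input key are placeholders for illustration, not requirements of the framework:

```python
class Model:
    """Minimal sketch of a Truss-style model class.

    Truss instantiates this class, calls load() once at startup,
    then calls predict() for each incoming request.
    """

    def __init__(self, **kwargs):
        self._model = None

    def load(self):
        # In a real deployment, load your trained weights here.
        # A trivial stand-in "model" that doubles its input:
        self._model = lambda x: x * 2

    def predict(self, model_input):
        # model_input is the deserialized request body; the "value"
        # key is a hypothetical input field chosen for this sketch.
        value = model_input["value"]
        return {"output": self._model(value)}
```

Packaged alongside a `config.yaml` describing dependencies and resources, a class like this is what Truss turns into an API endpoint.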
Stars
1,125
Forks
95
Language
Python
License
MIT
Category
Last pushed
Mar 12, 2026
Commits (30d)
61
Dependencies
27
Reverse dependents
1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/basetenlabs/truss"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Related frameworks
modelscope/modelscope
ModelScope: bring the notion of Model-as-a-Service to life.
Lightning-AI/LitServe
A minimal Python framework for building custom AI inference servers with full control over...
deepjavalibrary/djl-serving
A universal scalable machine learning model deployment solution
tensorflow/serving
A flexible, high-performance serving system for machine learning models
labmlai/labml
🔎 Monitor deep learning model training and hardware usage from your mobile phone 📱