kossisoroyce/timber
Ollama for classical ML models. AOT compiler that turns XGBoost, LightGBM, scikit-learn, CatBoost & ONNX models into native C99 inference code. One command to load, one command to serve. 336x faster than Python inference.
Timber helps data science and engineering teams deploy trained machine learning models — such as those from XGBoost, scikit-learn, or LightGBM — to production systems. It takes an existing model file and compiles it ahead of time into a small, dependency-free C artifact that serves predictions at native speed. This makes it well suited to platform, fraud, risk, or IoT teams that need to embed ML predictions directly in performance-critical applications.
Use this if you need to serve machine learning model predictions with extremely low latency, minimal memory footprint, and high reliability in production environments.
Not ideal if your models are primarily deep learning neural networks or if you are not concerned with sub-millisecond prediction times and dependency-free deployment.
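To see why AOT-compiled tree inference is fast, it helps to look at the general shape of the output: each decision tree becomes a nest of plain comparisons, so a prediction is a handful of branches with no model parsing, no allocation, and no runtime dependency. The sketch below illustrates that idea only — the thresholds, leaf values, and function names are invented, and this is not timber's actual generated code.

```c
/* Illustrative sketch of AOT-compiled gradient-boosted-tree inference.
 * All numbers here are made up; a real compiler emits one such function
 * per tree in the trained ensemble. */

/* One compiled tree: branch on feature values, return the leaf score. */
static float tree_0(const float *f) {
    if (f[2] < 0.73f) {
        if (f[0] < 1.5f) return -0.21f;
        return 0.18f;
    }
    if (f[1] < 42.0f) return 0.05f;
    return 0.34f;
}

static float tree_1(const float *f) {
    if (f[0] < 2.5f) return -0.09f;
    return 0.12f;
}

/* Ensemble prediction: base score plus the sum of the leaf scores,
 * as in gradient-boosted models. */
float predict(const float *features) {
    float score = 0.5f; /* hypothetical base score */
    score += tree_0(features);
    score += tree_1(features);
    return score;
}
```

Because the whole model is ordinary branchy C99, the compiler can inline and optimize it, and the artifact links into any C-compatible host with no Python interpreter or ML library at runtime — which is where the large speedups over Python inference come from.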
Stars: 636
Forks: 18
Language: Python
License: —
Category: —
Last pushed: Mar 13, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kossisoroyce/timber"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.