kossisoroyce/timber

Ollama for classical ML models. AOT compiler that turns XGBoost, LightGBM, scikit-learn, CatBoost & ONNX models into native C99 inference code. One command to load, one command to serve. 336x faster than Python inference.

Quality score: 41 / 100 (Emerging)

This tool helps data science and engineering teams deploy trained machine learning models (such as those from XGBoost, scikit-learn, or LightGBM) to production systems. It takes an existing model file and compiles it into a small, efficient C artifact that serves predictions with very low latency. This makes it well suited to platform, fraud, risk, or IoT teams that need to embed ML predictions directly in performance-critical applications.
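To illustrate the workflow, the snippet below produces the kind of model artifact such a compiler consumes. The XGBoost calls are real; the feature count, file name, and task are illustrative, and the compile and serve commands themselves are not shown because this listing does not document timber's exact CLI.

import numpy as np
import xgboost as xgb

# Train a small gradient-boosted classifier on synthetic data.
rng = np.random.default_rng(0)
X = rng.random((1000, 8))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

model = xgb.XGBClassifier(n_estimators=50, max_depth=4)
model.fit(X, y)

# Save the trained model; a JSON booster file like this is the kind of
# input an AOT model compiler would translate into native C inference code.
model.save_model("fraud_model.json")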


Use this if you need to serve machine learning model predictions with extremely low latency, minimal memory footprint, and high reliability in production environments.

Not ideal if your models are primarily deep learning neural networks or if you are not concerned with sub-millisecond prediction times and dependency-free deployment.
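To make the deployment story concrete, here is a hypothetical sketch of invoking a compiled model from Python via ctypes. It assumes the compiler emits a shared library exposing a predict function; the library name, function name, and signature are assumptions for illustration, not timber's documented interface.

import ctypes

# Hypothetical artifact: a shared library exposing
#   double predict(const double *features, size_t n);
lib = ctypes.CDLL("./libfraud_model.so")
lib.predict.restype = ctypes.c_double
lib.predict.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_size_t]

features = (ctypes.c_double * 8)(0.9, 0.4, 0.1, 0.0, 0.7, 0.2, 0.5, 0.3)
score = lib.predict(features, len(features))
print(f"fraud score: {score:.4f}")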

Tags: low-latency-inference, edge-ai, fraud-detection, risk-modeling, production-ml
No package published, no dependents.
Score breakdown:
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 11 / 25
Community: 10 / 25


Stars: 636
Forks: 18
Language: Python
License: not shown
Last pushed: Mar 13, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kossisoroyce/timber"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
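The same endpoint can also be queried from Python. A minimal sketch, assuming the endpoint returns a JSON body (its field names are not documented in this listing):

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/kossisoroyce/timber"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # assumed JSON response; inspect the fields it returns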