JetBrains-Research/kinference

Running ONNX models in vanilla Kotlin

Overall score: 37 / 100 (Emerging)

KInference lets software developers run pre-trained machine learning models (saved in the ONNX format) directly within their Kotlin applications: it takes an ONNX model and your application's data as input and produces the model's predictions as output. The library suits developers building Kotlin-based applications, whether desktop, mobile, or web, who need to embed AI capabilities without heavy external dependencies.

203 stars. No commits in the last 6 months.

Use this if you are a Kotlin developer who needs to embed machine learning inference directly into your application and wants it to run efficiently across platforms such as the JVM and JavaScript environments.

Not ideal if you are primarily training new machine learning models, as KInference is optimized for inference (running models), not for model development or training.
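Since the card reports no package-registry metadata ("No Package" below), wiring the library into a Kotlin/JVM build is the first practical step. A minimal Gradle (Kotlin DSL) sketch follows; the repository URL, the io.kinference:inference-core coordinates, and the version placeholder are assumptions to be verified against the project's README before use.

```kotlin
// build.gradle.kts -- illustrative dependency setup only.
// Repository URL and artifact coordinates are assumptions; check the
// KInference README for the authoritative values.
repositories {
    mavenCentral()
    // KInference artifacts are assumed to be published to a JetBrains Space
    // Maven repository rather than Maven Central.
    maven("https://packages.jetbrains.team/maven/p/ki/maven")
}

dependencies {
    // Core JVM inference engine; replace <version> with the latest release
    // listed in the project's README or GitHub releases.
    implementation("io.kinference:inference-core:<version>")
}
```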

Tags: Kotlin-development, ML-integration, cross-platform-ML, application-development, edge-AI
Flags: Stale (6 months) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 9 / 25


Stars: 203
Forks: 9
Language: Kotlin
License: Apache-2.0
Last pushed: Sep 25, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/JetBrains-Research/kinference"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.