JetBrains-Research/kinference
Running ONNX models in vanilla Kotlin
KInference lets software developers run pre-trained machine learning models (saved in the ONNX format) directly within their Kotlin applications. It takes an ONNX model and your application's data as input and produces the model's predictions. The library is aimed at developers building Kotlin applications, whether desktop, mobile, or web, who need to embed AI capabilities without heavy external dependencies.
203 stars. No commits in the last 6 months.
Use this if you are a Kotlin developer who needs to embed machine learning inference directly into your application and wants it to run efficiently across targets such as the JVM or JavaScript environments.
Not ideal if you are primarily training new machine learning models: KInference is optimized for inference (running models), not for model development or training.
Stars: 203
Forks: 9
Language: Kotlin
License: Apache-2.0
Category:
Last pushed: Sep 25, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/JetBrains-Research/kinference"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
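The endpoint above follows a simple path layout (category, then owner, then repo). A minimal Python sketch of a client that builds such a URL and fetches the JSON; the response schema is an assumption, since it is not documented here:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Path layout taken from the curl example: /quality/<category>/<owner>/<repo>
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # Assumption: the API returns a JSON object; its exact fields are not
    # documented on this page.
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Build the URL for this repository (no API key needed, up to 100 requests/day):
url = quality_url("ml-frameworks", "JetBrains-Research", "kinference")
```

Keeping URL construction separate from the network call makes the path logic easy to test without hitting the rate-limited endpoint.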
Higher-rated alternatives
SKaiNET-developers/SKaiNET
SKaiNET is an open-source deep learning framework written in Kotlin Multiplatform, designed with...
ildu00/ReGraph
The world's cheapest AI inference & training marketplace. Pay up to 80% less than traditional...
OpenMined/KotlinSyft
The official Syft worker for secure on-device machine learning
KotlinNLP/SimpleDNN
SimpleDNN is a machine learning lightweight open-source library written in Kotlin designed to...
sekwiatkowski/komputation
Komputation is a neural network framework for the Java Virtual Machine written in Kotlin and CUDA C.