aporia-ai/inferencedb
🚀 Stream inferences of real-time ML models in production to any data lake (Experimental)
This tool helps machine learning engineers and MLOps teams automatically stream the inputs and predictions of real-time production ML models into a data lake. It consumes model inference events from Kafka and stores them in columnar formats such as Parquet on S3, enabling tasks like model retraining, drift monitoring, and performance tracking.
No commits in the last 6 months.
Use this if you need to reliably capture all of your live machine learning model's inputs and outputs for later analysis, auditing, or model improvement cycles.
Not ideal if your models are not real-time, you don't use Kafka for data streams, or you're looking for an all-in-one MLOps platform rather than a specialized inference logging tool.
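The capture pattern described above (consume inference events from a stream, flush them in batches to durable storage) can be sketched in a few lines. This is a dependency-free illustration, not inferencedb's actual API: a `queue.Queue` stands in for a Kafka topic, and a local JSON Lines file stands in for Parquet on S3; all names are hypothetical.

```python
# Dependency-free sketch of the capture loop this kind of tool automates:
# drain inference events from a queue (standing in for a Kafka topic) and
# append them in batches to a file (standing in for Parquet on S3).
# Names and schema here are illustrative, not inferencedb's actual API.
import json
import queue

def flush_batch(q, out_path, batch_size=100):
    """Drain up to batch_size events and append them as JSON Lines."""
    batch = []
    while len(batch) < batch_size:
        try:
            batch.append(q.get_nowait())
        except queue.Empty:
            break  # stream is drained for now
    with open(out_path, "a") as f:
        for event in batch:
            f.write(json.dumps(event) + "\n")
    return len(batch)

# Two example inference events: model inputs alongside the prediction.
q = queue.Queue()
q.put({"model": "churn-v3", "inputs": [0.1, 0.9], "prediction": 0.72})
q.put({"model": "churn-v3", "inputs": [0.4, 0.2], "prediction": 0.31})
print(flush_batch(q, "inferences.jsonl"))  # 2
```

In a real deployment the queue would be a Kafka consumer and the writer would emit Parquet files to object storage, but the batching-and-append shape of the loop is the same.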
Stars: 81
Forks: 3
Language: Python
License: —
Category: —
Last pushed: Jun 10, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/aporia-ai/inferencedb"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
feast-dev/feast
The Open Source Feature Store for AI/ML
clearml/clearml-serving
ClearML - Model-Serving Orchestration and Repository Solution
lakehq/sail
LakeSail's computation framework with a mission to unify batch processing, stream processing,...
PaddlePaddle/Serving
A flexible, high-performance serving framework for machine learning models (PaddlePaddle's model-serving deployment framework)
SeldonIO/MLServer
An inference server for your machine learning models, including support for multiple frameworks,...