eora-ai/inferoxy
Service for quickly deploying and using dockerized Computer Vision models
This service helps machine learning engineers and MLOps specialists simplify the deployment and scaling of computer vision models. You supply pre-trained computer vision models packaged as Docker images, and it exposes an endpoint (such as a REST API) through which applications send images and receive inference results. It is designed for teams that need to run many computer vision models efficiently on shared infrastructure.
No commits in the last 6 months.
Use this if you are a machine learning engineer responsible for taking computer vision models from development into a production environment where they need to handle real-time image processing requests.
Not ideal if you are looking for a tool to train or develop computer vision models, or if you only need to run a single model occasionally without high-volume inference requirements.
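Inferoxy's actual request schema is not documented in this listing. As a minimal sketch, the example below assumes a hypothetical REST endpoint (`/infer`) that accepts a JSON body with a base64-encoded image; the URL, path, and payload keys are all illustrative assumptions, not Inferoxy's documented API.

```python
import base64
import json
from urllib import request

# Hypothetical endpoint: the real URL scheme and payload format
# are assumptions for illustration, not Inferoxy's documented API.
ENDPOINT = "http://localhost:8000/infer"

def build_payload(image_bytes: bytes, model: str) -> dict:
    """Package raw image bytes as a JSON-safe inference request."""
    return {
        "model": model,
        "image": base64.b64encode(image_bytes).decode("ascii"),
    }

def send_request(image_bytes: bytes, model: str) -> dict:
    """POST the payload and return the decoded JSON response."""
    body = json.dumps(build_payload(image_bytes, model)).encode("utf-8")
    req = request.Request(
        ENDPOINT, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

The base64 step keeps binary image data safe inside a JSON body; a production client might instead stream multipart form data, depending on what the deployed model server expects.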
Stars: 95
Forks: 3
Language: Python
License: GPL-3.0
Category:
Last pushed: Jul 15, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/eora-ai/inferoxy"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
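The curl call above can also be scripted. This sketch only builds the documented endpoint URL and fetches it with the Python standard library; the response schema is not shown in this listing, so the JSON is returned as-is.

```python
import json
from urllib import request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record (schema undocumented here) as parsed JSON."""
    with request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)
```

Within the free tier this needs no API key, up to the stated 100 requests/day.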
Higher-rated alternatives
feast-dev/feast
The Open Source Feature Store for AI/ML
clearml/clearml-serving
ClearML - Model-Serving Orchestration and Repository Solution
lakehq/sail
LakeSail's computation framework with a mission to unify batch processing, stream processing,...
PaddlePaddle/Serving
A flexible, high-performance carrier for machine learning models (the PaddlePaddle model-serving framework)
SeldonIO/MLServer
An inference server for your machine learning models, including support for multiple frameworks,...