Angel-ML/serving
A standalone industrial serving system for Angel.
This project helps operations engineers and MLOps professionals deploy machine learning and deep learning models for real-time prediction. It takes trained models from platforms such as PyTorch, Spark, or XGBoost and serves them through gRPC or RESTful API endpoints, so applications can send new data and receive immediate predictions.
No commits in the last 6 months.
Use this if you need a high-performance system to serve your machine learning models in a production environment and require features like model version control and performance monitoring.
Not ideal if you are looking for a platform to train your models, as this system focuses solely on model deployment and serving.
Stars: 66
Forks: 35
Language: Java
License: —
Category: ml-frameworks
Last pushed: Apr 12, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Angel-ML/serving"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
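If you want to query this endpoint for other repositories, the URL can be built programmatically. This is a minimal sketch: the `{category}/{owner}/{repo}` path structure is an assumption generalized from the single curl example above, not documented API behavior.

```python
# Build the quality-API URL for a repository, following the pattern
# shown in the curl example. The path layout "{category}/{owner}/{repo}"
# is assumed from that one example and may not hold for every endpoint.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_api_url(category: str, owner: str, repo: str) -> str:
    """Return the assumed API endpoint URL for one repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

print(quality_api_url("ml-frameworks", "Angel-ML", "serving"))
```

Pair this with any HTTP client (e.g. `curl` or Python's `urllib.request`); responses under the free tier are limited to 100 requests per day without a key.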
Higher-rated alternatives
modelscope/modelscope
ModelScope: bring the notion of Model-as-a-Service to life.
basetenlabs/truss
The simplest way to serve AI/ML models in production
Lightning-AI/LitServe
A minimal Python framework for building custom AI inference servers with full control over...
deepjavalibrary/djl-serving
A universal scalable machine learning model deployment solution
tensorflow/serving
A flexible, high-performance serving system for machine learning models