Angel-ML/serving

A standalone, industrial-grade serving system for Angel.

Quality score: 45 / 100 (Emerging)

This project helps operations engineers and MLOps professionals deploy machine learning and deep learning models for real-time prediction. It takes trained models from platforms such as PyTorch, Spark, and XGBoost and serves them through gRPC or RESTful API endpoints, so applications can send new data and receive predictions immediately.

No commits in the last 6 months.

Use this if you need a high-performance system to serve your machine learning models in a production environment and require features like model version control and performance monitoring.

Not ideal if you are looking for a platform to train your models, as this system focuses solely on model deployment and serving.

Tags: model deployment · machine learning operations · real-time inference · AI infrastructure · predictive analytics
Badges: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 21 / 25


Stars: 66
Forks: 35
Language: Java
License:
Last pushed: Apr 12, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Angel-ML/serving"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
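The same endpoint can be queried programmatically. The sketch below assembles the URL following the pattern in the curl example above; note the response's JSON field names are not documented here, so the code only decodes and prints the raw payload rather than assuming a schema:

```python
# Minimal sketch of a client for the quality API shown above.
# The URL pattern is taken from the curl example; the shape of the
# returned JSON is an assumption and is not parsed field-by-field.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Assemble the API URL following the documented pattern."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch one repo's quality record; assumes the endpoint returns JSON."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    url = quality_url("ml-frameworks", "Angel-ML", "serving")
    print(url)
    # Uncomment to hit the live endpoint (counts against the daily quota):
    # print(json.dumps(fetch_quality("ml-frameworks", "Angel-ML", "serving"), indent=2))
```

The live request is left commented out so the example can be read and tested without consuming the 100-requests/day anonymous quota.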