practicingman/bert_serving

export bert model for serving

Quality score: 47 / 100 (Emerging)

This project helps machine learning engineers deploy trained BERT models for live predictions. It takes a pre-trained or fine-tuned BERT model and exports it in TensorFlow's SavedModel format, optimized for fast serving. This is useful for developers who need to integrate BERT's natural language understanding capabilities into applications with low-latency requirements.
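The core idea, exporting a trained model as a SavedModel that a serving system can load, can be sketched as follows. This is a minimal illustration with a toy stand-in model, not the repository's actual export code (which targets the TF 1.x Estimator API); the class and variable names here are hypothetical.

```python
# Minimal sketch of a SavedModel export for serving.
# TinyClassifier is a hypothetical stand-in for a fine-tuned BERT:
# it maps token ids to class logits via an embedding lookup.
import os
import tempfile

import tensorflow as tf


class TinyClassifier(tf.Module):
    """Toy model: mean-pooled embeddings followed by a linear layer."""

    def __init__(self, vocab_size=100, num_classes=2):
        super().__init__()
        self.embed = tf.Variable(tf.random.normal([vocab_size, 8]))
        self.dense = tf.Variable(tf.random.normal([8, num_classes]))

    @tf.function(
        input_signature=[tf.TensorSpec([None, None], tf.int32, name="input_ids")]
    )
    def serve(self, input_ids):
        # [batch, seq] -> [batch, seq, 8] -> [batch, 8] -> [batch, num_classes]
        pooled = tf.reduce_mean(tf.gather(self.embed, input_ids), axis=1)
        return {"logits": tf.matmul(pooled, self.dense)}


model = TinyClassifier()
# Versioned subdirectory ("1") is the layout TensorFlow Serving expects.
export_dir = os.path.join(tempfile.mkdtemp(), "saved_model", "1")
tf.saved_model.save(model, export_dir, signatures={"serving_default": model.serve})
print(os.path.exists(os.path.join(export_dir, "saved_model.pb")))
```

Once exported this way, the directory can be pointed at by a TensorFlow Serving instance for real-time inference.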

141 stars. No commits in the last 6 months.

Use this if you are a machine learning engineer looking to deploy a BERT model for real-time inference in a production environment.

Not ideal if you are looking for a high-level API or a pre-built serving solution that doesn't require direct manipulation of TensorFlow's export functions.

Tags: MLOps · Model Deployment · Natural Language Processing · Deep Learning · Inference · TensorFlow Serving
Flags: Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 21 / 25


Stars: 141
Forks: 36
Language: Python
License: Apache-2.0
Last pushed: Dec 12, 2018
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/practicingman/bert_serving"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
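The same endpoint can be called from Python. The sketch below only builds the URL from the path pattern visible in the curl example above; the `quality_url` helper and its `ecosystem` parameter are assumptions for illustration, and the response schema is not documented here.

```python
# Hypothetical helper for the quality API shown in the curl example.
# Only the URL pattern is taken from the example; everything else is assumed.
API = "https://pt-edge.onrender.com/api/v1/quality/{ecosystem}/{owner}/{repo}"


def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return API.format(ecosystem=ecosystem, owner=owner, repo=repo)


url = quality_url("nlp", "practicingman", "bert_serving")
print(url)

# To actually fetch the data (requires network access):
# import urllib.request
# with urllib.request.urlopen(url) as resp:
#     payload = resp.read()
```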