asprenger/ray_vllm_inference

A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving.

Quality score: 39 / 100 (Emerging)

This service helps developers serve large language models (LLMs) quickly and efficiently. It takes an LLM from Hugging Face and serves it as an API endpoint, returning generated text based on prompts. This is for machine learning engineers or MLOps teams who need to deploy LLMs for applications requiring high throughput and responsiveness.
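As a rough illustration of the request/response shape such a service exposes, here is a minimal client sketch. The route (/generate), port, and JSON fields are assumptions for illustration; the repo's README defines the actual request format.

# Hypothetical client sketch: the route (/generate), port, and JSON fields
# below are assumptions, not the repo's documented schema.
import requests

resp = requests.post(
    "http://localhost:8000/generate",
    json={"prompt": "The capital of France is", "max_tokens": 32},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())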

No commits in the last 6 months.

Use this if you need to deploy large language models for production applications that require continuous batching, streaming output, or multi-GPU support (a deployment sketch follows below).

Not ideal if you are looking for a simple, low-code solution for basic LLM prompting without needing to manage infrastructure or optimize for high-scale serving.
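For a rough sense of the pattern this project implements, the sketch below wraps a vLLM engine in a Ray Serve deployment. The model name, route prefix, and request schema are illustrative assumptions, and a production service would use vLLM's asynchronous engine to stream tokens and batch across concurrent requests rather than the blocking generate() call shown here.

# Minimal sketch, assuming Ray Serve 2.x and vLLM are installed.
from ray import serve
from starlette.requests import Request
from vllm import LLM, SamplingParams

@serve.deployment(ray_actor_options={"num_gpus": 1})
class VLLMDeployment:
    def __init__(self):
        # tensor_parallel_size > 1 would shard the model across GPUs
        self.llm = LLM(model="facebook/opt-125m", tensor_parallel_size=1)

    async def __call__(self, request: Request) -> dict:
        body = await request.json()
        params = SamplingParams(max_tokens=body.get("max_tokens", 64))
        # Run the prompt through vLLM's batched engine (simplified:
        # the real service would stream via the async engine)
        outputs = self.llm.generate([body["prompt"]], params)
        return {"text": outputs[0].outputs[0].text}

serve.run(VLLMDeployment.bind(), route_prefix="/generate")

Wrapping the engine in a single Serve deployment lets Ray handle replication and HTTP routing, while vLLM handles scheduling and batching within each replica.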

Tags: LLM deployment, MLOps, AI infrastructure, API serving, scalable inference
Status: Stale (6 months), no published package, no known dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 14 / 25


Stars: 78
Forks: 11
Language: Python
License: Apache-2.0
Last pushed: Apr 06, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/asprenger/ray_vllm_inference"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.