asprenger/ray_vllm_inference
A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving.
This service helps developers serve large language models (LLMs) quickly and efficiently. It loads an LLM from Hugging Face and exposes it as an HTTP API endpoint that returns generated text for a given prompt. It is aimed at machine learning engineers and MLOps teams who need to deploy LLMs for applications requiring high throughput and low latency.
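As a sketch of the client-side usage pattern only: once such a service is running, a client posts a prompt and reads back the generated text. The route name ("/generate") and payload fields ("prompt", "max_tokens") here are illustrative assumptions, not confirmed from the repo.

import requests

# Hypothetical request; route and field names are assumptions for illustration.
resp = requests.post(
    "http://localhost:8000/generate",
    json={"prompt": "Explain continuous batching in one sentence.", "max_tokens": 64},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())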
No commits in the last 6 months.
Use this if you need to deploy large language models with continuous batching, streaming output, and multi-GPU support for production workloads (see the deployment sketch below).
Not ideal if you are looking for a simple, low-code solution for basic LLM prompting without needing to manage infrastructure or optimize for high-scale serving.
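For context, a minimal sketch of the pattern this repo implements: wrapping a vLLM engine in a Ray Serve deployment. The model id, GPU count, and response shape are illustrative assumptions, not the repo's actual code.

import json

from starlette.requests import Request

from ray import serve
from vllm import LLM, SamplingParams


# Hypothetical deployment; one GPU per replica is an assumption.
@serve.deployment(ray_actor_options={"num_gpus": 1})
class VLLMDeployment:
    def __init__(self) -> None:
        # Any Hugging Face model id can be loaded; a small model keeps the sketch cheap.
        self.llm = LLM(model="facebook/opt-125m")

    async def __call__(self, request: Request) -> str:
        body = await request.json()
        params = SamplingParams(max_tokens=int(body.get("max_tokens", 64)))
        # Blocking generate() call for brevity; a production setup would use
        # vLLM's async engine to get continuous batching and token streaming.
        outputs = self.llm.generate([body["prompt"]], params)
        return json.dumps({"text": outputs[0].outputs[0].text})


app = VLLMDeployment.bind()
# Run with: serve run my_module:app   (then POST prompts to http://localhost:8000/)

Note the trade-off in this sketch: the synchronous LLM class is simpler but serializes requests per replica, whereas the async engine is what actually delivers the continuous batching and streaming the listing advertises.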
Stars: 78
Forks: 11
Language: Python
License: Apache-2.0
Last pushed: Apr 06, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/asprenger/ray_vllm_inference"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
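The same data can be fetched programmatically; a minimal sketch follows. The response schema is not documented here, so the example just prints the raw JSON.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/asprenger/ray_vllm_inference"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
print(resp.json())  # field names depend on the API's schema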
Higher-rated alternatives
PaddlePaddle/FastDeploy
High-performance Inference and Deployment Toolkit for LLMs and VLMs based on PaddlePaddle
mlc-ai/mlc-llm
Universal LLM Deployment Engine with ML Compilation
skyzh/tiny-llm
A course of learning LLM inference serving on Apple Silicon for systems engineers: build a tiny...
ServerlessLLM/ServerlessLLM
Serverless LLM Serving for Everyone.
AXERA-TECH/ax-llm
Explore LLM model deployment based on AXera's AI chips