Alexyskoutnev/TurboInference
Welcome to TurboInference, a high-performance inference toolkit written in C++ for rapid, efficient deployment of large language models. This GitHub repository provides a comprehensive set of tools and utilities designed to make inference tasks fast and resource-efficient.
No commits in the last 6 months.
Stars: 1
Forks: —
Language: —
License: MIT
Category: —
Last pushed: Nov 27, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Alexyskoutnev/TurboInference"
Open to everyone: 100 requests/day with no API key required; a free key raises the limit to 1,000 requests/day.
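For scripted access, a minimal Python sketch is below. It assumes the endpoint returns a JSON object; the response schema is not documented on this page, so the sketch simply prints whatever top-level fields it receives rather than relying on any particular key names.

import json
import urllib.request

# Quality-report endpoint for this repository (taken from the curl example above).
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/Alexyskoutnev/TurboInference"

# Fetch the report. Assumption: the API returns a JSON body on success.
with urllib.request.urlopen(URL, timeout=10) as resp:
    report = json.load(resp)

# Print whatever top-level fields the API returns (field names are not
# documented here, so nothing specific is assumed about them).
for key, value in report.items():
    print(f"{key}: {value}")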
Higher-rated alternatives
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
sgl-project/sglang
SGLang is a high-performance serving framework for large language models and multimodal models.
alibaba/MNN
MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering...
xorbitsai/inference
Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source,...
tensorzero/tensorzero
TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM...