livepeer/ai-runner
Inference runtime for batch and real-time AI pipelines.
This project helps developers integrate and manage AI inference within the Livepeer network. It loads trained AI models into GPU memory, serves inference requests against them, and returns the generated output. Its primary users are developers building or maintaining applications that run AI models on the Livepeer platform.
Use this if you are a developer looking to deploy and run various AI models for inference as part of a distributed AI pipeline on the Livepeer network.
Not ideal if you are an end-user without programming experience, as this is a technical tool for developers to integrate AI capabilities into their applications.
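As a sketch of what interacting with a running instance might look like, here is a minimal Python request against a locally running runner container. The /text-to-image route, port 8000, and payload fields are assumptions for illustration based on common runner setups, not a documented contract; consult the repository docs for the actual pipeline endpoints.

import requests

# Hypothetical request against a locally running ai-runner container.
# The route name, port, and payload fields are assumptions; check the
# repository docs for the pipelines and endpoints it actually exposes.
payload = {
    "prompt": "a watercolor painting of a lighthouse at dusk",
}
resp = requests.post("http://localhost:8000/text-to-image", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json())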
Stars: 25
Forks: 31
Language: Python
License: MIT
Category: Transformers
Last pushed: Jan 27, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/livepeer/ai-runner"
The endpoint is open to everyone at 100 requests/day with no key required; a free key raises the limit to 1,000/day.
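For scripted access, the same endpoint can be called from Python. This is a minimal sketch using the requests library; the JSON field names (stars, forks) are assumptions for illustration, since the response schema is not documented here.

import requests

# Fetch repository quality data from the pt-edge API. No key is needed
# for up to 100 requests/day; a free key raises the limit to 1,000/day.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/livepeer/ai-runner"
resp = requests.get(url, timeout=10)
resp.raise_for_status()

data = resp.json()
# Field names below are illustrative assumptions; print(data) to
# inspect the actual payload the API returns.
print(data.get("stars"), data.get("forks"))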
Related projects
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
sgl-project/sglang
SGLang is a high-performance serving framework for large language models and multimodal models.
alibaba/MNN
MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering...
xorbitsai/inference
Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source,...
tensorzero/tensorzero
TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM...