livepeer/ai-runner

Inference runtime for running batch and real-time AI pipelines.

Score: 53 / 100 (Established)

This project helps developers integrate and manage AI inference within the Livepeer network. It takes trained AI models and inference requests, executes them efficiently on GPU hardware, and returns the generated output. The primary users are developers building or maintaining applications that require AI model execution on the Livepeer platform.

Use this if you are a developer looking to deploy and run various AI models for inference as part of a distributed AI pipeline on the Livepeer network.

Not ideal if you are an end-user without programming experience, as this is a technical tool for developers to integrate AI capabilities into their applications.

Tags: AI-inference, distributed-computing, model-deployment, application-development, backend-engineering
No package · No dependents
Maintenance 10 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 20 / 25


Stars: 25
Forks: 31
Language: Python
License: MIT
Last pushed: Jan 27, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/livepeer/ai-runner"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
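The curl command above can also be wrapped in a small script. The sketch below builds the endpoint URL shown on this page and fetches the JSON report; the response schema and the `fetch_quality` helper name are assumptions, as only the URL and rate limits come from the page.

```python
import json
import urllib.request

# Base path taken from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repository."""
    return f"{API_BASE}/{ecosystem}/{owner}/{repo}"


def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report.

    Without an API key this endpoint allows 100 requests/day;
    the shape of the returned dict is an assumption.
    """
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)


# URL for the repository described on this page:
url = quality_url("transformers", "livepeer", "ai-runner")
```

Calling `fetch_quality("transformers", "livepeer", "ai-runner")` performs the same request as the curl command above.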