RamboRogers/cyber-inference

Cyber-Inference is a web GUI management tool for running OpenAI-compatible inference servers. Built on llama.cpp, it provides automatic model management, dynamic resource allocation, and a beautiful cyberpunk-themed interface designed for edge deployment.

Score: 35 / 100 (Emerging)

This tool provides a centralized hub to manage and run various artificial intelligence models directly on your own computer or server. It allows you to easily download models for chat, text generation, and speech-to-text, then use them through a simple web interface or a standard API. It's designed for system administrators or developers who need to deploy and control multiple local AI models for various applications.

Use this if you need a user-friendly way to host and manage different local AI models (like large language models or speech recognition models) on your own hardware, accessible via a standard API.

Not ideal if you're a casual user looking for a simple desktop AI application, or if you primarily rely on cloud-based AI services.
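Since the models are served behind an OpenAI-compatible API, any standard OpenAI-style client can talk to them. A minimal stdlib-only sketch; the host, port, and model name here are assumptions for illustration, not values documented by this project:

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a standard OpenAI-style POST to /v1/chat/completions."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical local deployment -- adjust host, port, and model to yours.
req = build_chat_request("http://localhost:8080", "llama-3", "Hello!")
# To actually send: urllib.request.urlopen(req)
```

Because the endpoint shape is the standard OpenAI one, existing OpenAI SDKs should also work by pointing their base URL at the local server.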

Tags: AI deployment · local LLMs · edge AI · speech-to-text · model management
No Package · No Dependents
Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 13 / 25
Community: 7 / 25
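The composite score is simply the four category scores added together, each out of 25 for a 100-point total, which checks out against the figures shown here:

```python
# Category scores from the breakdown above, each out of 25.
subscores = {"Maintenance": 10, "Adoption": 5, "Maturity": 13, "Community": 7}

total = sum(subscores.values())  # composite score out of 100
print(total)  # 35
```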


Stars: 10
Forks: 1
Language: Python
License: GPL-3.0
Last pushed: Feb 16, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/RamboRogers/cyber-inference"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
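The same endpoint can be queried from code without extra dependencies. A minimal stdlib sketch; how an API key is attached (header name or otherwise) is not documented here, so the `X-API-Key` header below is an assumption to verify against the service's docs:

```python
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"

def build_quality_request(owner: str, repo: str, api_key=None) -> urllib.request.Request:
    """Build a GET request for a repo's quality data; the key is optional
    (100 requests/day without one, per the service's stated limits)."""
    headers = {"Accept": "application/json"}
    if api_key:
        headers["X-API-Key"] = api_key  # assumed header name, not documented
    return urllib.request.Request(f"{BASE}/{owner}/{repo}", headers=headers)

req = build_quality_request("RamboRogers", "cyber-inference")
# To fetch: body = urllib.request.urlopen(req).read()
```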