RamboRogers/cyber-inference
Cyber-Inference is a web GUI management tool for running OpenAI-compatible inference servers. Built on llama.cpp, it provides automatic model management, dynamic resource allocation, and a beautiful cyberpunk-themed interface designed for edge deployment.
This tool provides a centralized hub to manage and run artificial intelligence models directly on your own computer or server. It lets you download models for chat, text generation, and speech-to-text, then use them through a simple web interface or a standard API. It's designed for system administrators and developers who need to deploy and control multiple local AI models for a range of applications.
Use this if you need a user-friendly way to host and manage different local AI models (like large language models or speech recognition models) on your own hardware, accessible via a standard API.
Not ideal if you're a casual user looking for a simple desktop AI application, or if you primarily rely on cloud-based AI services.
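Because the server exposes an OpenAI-compatible API, any standard OpenAI-style HTTP client should work against it. The sketch below builds a chat-completions request using only the Python standard library; the base URL (`http://localhost:8080/v1`) and model name (`llama-3`) are placeholder assumptions, not documented defaults of Cyber-Inference.

```python
import json
import urllib.request


def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style POST /chat/completions request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Sending the request requires a running server, e.g.:
#   req = build_chat_request("http://localhost:8080/v1", "llama-3", "Hello!")
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

Separating request construction from transport keeps the snippet testable without a live server; adjust host, port, and model name to match your deployment.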
Stars: 10
Forks: 1
Language: Python
License: GPL-3.0
Category:
Last pushed: Feb 16, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/RamboRogers/cyber-inference"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
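The same endpoint shown in the curl command can be consumed programmatically. The sketch below builds the request URL from an owner/repo pair and fetches the JSON record; the response schema and any API-key header name are assumptions, so check the service's own documentation before relying on them.

```python
import json
import urllib.request

# Base URL taken from the curl example above
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"


def quality_url(owner: str, repo: str) -> str:
    """Return the quality-API URL for a given GitHub owner/repo."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record as parsed JSON (needs network access)."""
    req = urllib.request.Request(
        quality_url(owner, repo),
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example (keyless tier, 100 requests/day):
#   data = fetch_quality("RamboRogers", "cyber-inference")
```

Keeping URL construction in its own helper makes the client easy to test offline and to point at a different catalog instance.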
Higher-rated alternatives
InfinitiBit/graphbit
GraphBit is the world’s first enterprise-grade Agentic AI framework, built on a Rust core with a...
autogluon/autogluon-assistant
Multi-Agent System Powered by LLMs for End-to-end Multimodal ML Automation
pguso/agents-from-scratch
Build AI agents from first principles using a local LLM - no frameworks, no cloud APIs, no...
samholt/L2MAC
🚀 The LLM Automatic Computer Framework: L2MAC
pguso/ai-agents-from-scratch
Demystify AI agents by building them yourself. Local LLMs, no black boxes, real understanding of...