brontoguana/ktop
Terminal system resource monitor for hybrid LLM workloads
This tool lets developers and machine learning engineers monitor system resources in real time while running AI models, especially large language models (LLMs). It collects raw CPU, GPU, memory, and network usage data and presents it in an intuitive, color-coded terminal interface, making it easier to diagnose performance bottlenecks and resource consumption during LLM training or inference.
Use this if you are a developer or ML engineer frequently working with hybrid LLM workloads and need a lightweight, real-time view of your system's hardware performance directly in your terminal.
Not ideal if you are looking for a GUI-based monitoring solution or need to track resource usage on non-Linux operating systems like Windows or macOS.
Stars
64
Forks
6
Language
Rust
License
—
Category
llm-tools
Last pushed
Mar 11, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/brontoguana/ktop"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
trymirai/uzu
A high-performance inference engine for AI models
justrach/bhumi
⚡ Bhumi – The fastest AI inference client for Python, built with Rust for unmatched speed,...
lipish/llm-connector
LLM Connector - A unified interface for connecting to various Large Language Model providers
keyvank/femtoGPT
Pure Rust implementation of a minimal Generative Pretrained Transformer
ShelbyJenkins/llm_client
The Easiest Rust Interface for Local LLMs and an Interface for Deterministic Signals from...