justrach/bhumi
⚡ Bhumi – The fastest AI inference client for Python, built with Rust for unmatched speed, efficiency, and scalability 🚀
This tool is for developers building applications on large language models (LLMs) that must stay fast and efficient. It provides a single client for sending requests to more than nine AI providers, including OpenAI, Anthropic, and Google Gemini, letting developers integrate high-performance AI capabilities such as text generation and image analysis into their products.
Available on PyPI.
Use this if you are a developer building production-ready AI applications and need the fastest possible interaction with various LLM and vision APIs.
Not ideal if you are an end-user without programming experience or if your application does not require high-throughput, low-latency AI inference.
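To give a feel for the developer experience, here is a minimal usage sketch that sends one chat request to OpenAI through Bhumi. The names used (bhumi.base_client, LLMConfig, BaseLLMClient, completion, the "provider/model" string) follow the project's README at the time of writing; treat them as assumptions and check the repo for the current API.

# Minimal usage sketch, assuming the LLMConfig/BaseLLMClient API shown in
# the project's README; verify against the repo before relying on it.
# Install first: pip install bhumi
import asyncio
import os

from bhumi.base_client import BaseLLMClient, LLMConfig

async def main() -> None:
    config = LLMConfig(
        api_key=os.environ["OPENAI_API_KEY"],  # provider credential
        model="openai/gpt-4o-mini",            # "provider/model" selector
    )
    client = BaseLLMClient(config)

    # completion() takes OpenAI-style chat messages and returns the
    # generated text in the response.
    response = await client.completion(
        [{"role": "user", "content": "Summarize Rust's ownership model in one line."}]
    )
    print(response["text"])

asyncio.run(main())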
Stars: 64
Forks: 6
Language: Python
License: Apache-2.0
Category: LLM tools
Last pushed: Jan 22, 2026
Commits (30d): 0
Dependencies: 1
Get this data via API:
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/justrach/bhumi"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
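The same endpoint can be queried from Python with nothing but the standard library. Only the URL below comes from the curl command above; that the response body is JSON is an assumption, since the payload shape is not documented here.

# Fetch the quality data for justrach/bhumi from the public endpoint.
# Sketch only: assumes a JSON response body and pretty-prints it.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/justrach/bhumi"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))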
Related tools
trymirai/uzu: A high-performance inference engine for AI models
lipish/llm-connector: A unified interface for connecting to various Large Language Model providers
keyvank/femtoGPT: Pure Rust implementation of a minimal Generative Pretrained Transformer
ShelbyJenkins/llm_client: The Easiest Rust Interface for Local LLMs and an Interface for Deterministic Signals from...
rustformers/llm: [Unmaintained, see README] An ecosystem of Rust libraries for working with large language models