profullstack/infernet-protocol
Infernet: A Peer-to-Peer Distributed GPU Inference Protocol
This is a foundational development repository for building a peer-to-peer network for distributed GPU inference. It provides the core application structure and database schema for managing nodes, providers, aggregators, clients, models, and jobs within the network. Developers working on the Infernet Protocol use it to set up and run the local development environment for the web and desktop applications.
Use this if you are a developer building or extending the Infernet Protocol and need to set up its web or desktop application locally.
Not ideal if you are an end-user looking to simply consume or run GPU inference tasks without developing the underlying protocol.
Stars
22
Forks
—
Language
JavaScript
License
ISC
Last pushed
Mar 13, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/profullstack/infernet-protocol"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
AlexsJones/llmfit
Hundreds of models & providers. One command to find what runs on your hardware.
victordibia/llmx
An API for Chat Fine-Tuned Large Language Models (llm)
Chen-zexi/vllm-cli
A command-line interface tool for serving LLM using vLLM.
InftyAI/llmaz
☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work!
livehl/aimirror
🚀 200x faster! A download accelerator for the AI era | Full acceleration for Docker/PyPI/HuggingFace/CRAN | Parallel chunking + smart caching to make downloads fly