dingodb/dingospeed
dingospeed is a self-hosted Hugging Face mirror service.
This project helps machine learning practitioners and researchers efficiently access and manage large AI models and datasets. It acts as a local mirror for Hugging Face resources: model and dataset requests are routed through the mirror, which caches them and serves faster, more reliable downloads. It is designed for anyone working with AI/ML who frequently downloads models or datasets, especially in environments with limited or slow internet access.
Use this if your team frequently downloads large AI models and datasets from Hugging Face, and you need to improve download speeds, reduce network traffic, or ensure reliable access in environments with intermittent or restricted internet.
Not ideal if you only occasionally download small AI models or datasets, or if your primary need is not related to download efficiency, local storage, or offline access.
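As a rough sketch of how a mirror like this is typically used: the `huggingface_hub` client library and `huggingface-cli` honor the `HF_ENDPOINT` environment variable, so pointing it at your local instance routes downloads through the mirror. The host and port below are assumptions, not dingospeed defaults; substitute wherever your instance is actually listening.

```shell
# Assumed address of a locally running dingospeed instance --
# replace host/port with your own deployment.
export HF_ENDPOINT=http://localhost:8090

# With HF_ENDPOINT set, huggingface-cli (and Python code using
# huggingface_hub) fetches through the mirror instead of hitting
# huggingface.co directly:
huggingface-cli download bert-base-uncased
```

Setting the variable in your shell profile (or in CI job environments) applies the mirror to every Hugging Face download without changing any application code.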
Stars: 30
Forks: 15
Language: Go
License: —
Category: —
Last pushed: Feb 27, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/dingodb/dingospeed"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
containers/ramalama
RamaLama is an open-source developer tool that simplifies the local serving of AI models from...
av/harbor
One command brings a complete pre-wired LLM stack with hundreds of services to explore.
RunanywhereAI/runanywhere-sdks
Production ready toolkit to run AI locally
runpod-workers/worker-vllm
The RunPod worker template for serving our large language model endpoints. Powered by vLLM.
foldl/chatllm.cpp
Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU)