livehl/aimirror
🚀 200x faster! The download accelerator for the AI era | Full-speed mirroring for Docker/PyPI/HuggingFace/CRAN | Parallel chunked downloads + smart caching make downloads fly
Struggling with slow downloads of large files like AI models, Docker images, or Python packages? This tool accelerates downloads from popular repositories like PyPI, Docker Hub, Hugging Face, and CRAN by using parallel fetching and smart local caching. It acts as a local mirror, taking your standard download requests and serving the files back to you much faster. This is ideal for data scientists, machine learning engineers, and software developers who frequently download large dependencies.
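Because the tool acts as a local mirror, package managers are simply pointed at it instead of the upstream registry. A minimal sketch for pip is below; the host, port, and path are assumptions for illustration, not documented endpoints — check the project's README for the actual addresses it serves.

```ini
; pip.conf — hypothetical mirror address, shown only to illustrate the idea
[global]
index-url = http://localhost:8080/pypi/simple/
```

Docker, Hugging Face, and CRAN clients would be redirected the same way through their own registry or mirror settings (e.g. `registry-mirrors` in Docker's daemon.json, or `options(repos=...)` in R).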
671 stars. Actively maintained with 12 commits in the last 30 days. Available on PyPI.
Use this if you are an engineer or scientist frequently downloading large packages, models, or container images from repositories like PyPI, Docker Hub, Hugging Face, or CRAN over a slow network, and you need significantly faster download speeds and reduced repetitive downloads.
Not ideal if you only download small files occasionally, do not work with the mentioned repositories, or already have a very fast and reliable network connection without a need for caching.
Stars: 671
Forks: 5
Language: Python
License: MIT
Category:
Last pushed: Mar 10, 2026
Commits (30d): 12
Dependencies: 6
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/livehl/aimirror"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
AlexsJones/llmfit
Hundreds of models & providers. One command to find what runs on your hardware.
victordibia/llmx
An API for Chat Fine-Tuned Large Language Models (llm)
Chen-zexi/vllm-cli
A command-line interface tool for serving LLM using vLLM.
InftyAI/llmaz
☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work!
TakatoHonda/sui-lang
粋 (Sui) - A programming language optimized for LLM code generation