datawhalechina/handy-ollama
Learn Ollama hands-on: run large models on a CPU. Read online at: https://datawhalechina.github.io/handy-ollama/
This project provides a tutorial for deploying large language models (LLMs) locally on your personal computer, even without a powerful graphics card (GPU). It guides you through installing Ollama, importing various LLM formats, and using them for applications like local chatbots or AI assistants. This is for developers, researchers, or enthusiasts who want to experiment with or build applications using LLMs without relying on cloud services or high-end hardware.
Use this if you want to run and manage large language models on your local machine using your computer's CPU, avoiding cloud costs or GPU limitations.
Not ideal if you require extremely high performance for complex, large-scale LLM training or inference that inherently demands specialized GPU hardware.
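Once Ollama is installed, local applications talk to its HTTP server (by default on port 11434). A minimal sketch of sending one prompt and reading the reply, using only the standard library; the model name `llama3.2` is just an example, and this assumes an Ollama server is already running locally:

```python
import json
from urllib import request

# Ollama's default local endpoint for one-shot text generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Payload for Ollama's /api/generate endpoint (stream=False for a single reply)."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return the reply text."""
    payload = json.dumps(build_request(model, prompt)).encode()
    req = request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Example (requires `ollama pull llama3.2` and a running server):
# print(generate("llama3.2", "Say hello in one sentence."))
```

The tutorial itself walks through richer clients (chatbots, assistants); this shows only the core request/response shape.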
Stars
2,277
Forks
287
Language
Jupyter Notebook
License
—
Category
Last pushed
Jan 15, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/datawhalechina/handy-ollama"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
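The same data can be fetched from a script. A minimal sketch that builds the endpoint URL for any owner/repo pair and retrieves the JSON; the response schema is not documented here, so the result is returned as a plain dict:

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def repo_endpoint(owner: str, repo: str) -> str:
    """Build the API URL for a given repository."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_repo_quality(owner: str, repo: str) -> dict:
    """Fetch the repository's quality data (schema not documented here)."""
    with urlopen(repo_endpoint(owner, repo)) as resp:
        return json.load(resp)

# Example: the endpoint for this repository
print(repo_endpoint("datawhalechina", "handy-ollama"))
```

Unauthenticated calls are limited to 100 requests/day, so cache responses if you poll many repositories.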
Related tools
WangRongsheng/awesome-LLM-resources
🧑🚀 The world's best summary of LLM resources (multimodal generation, agents, coding assistance, AI paper review, data processing, model training, model inference, o1 models, MCP, small language models, vision-language models) | Summary of the...
SylphAI-Inc/AdalFlow
AdalFlow: The library to build & auto-optimize LLM applications.
LazyAGI/LazyLLM
Easiest and laziest way for building multi-agent LLMs applications.
luhengshiwo/LLMForEverybody
Large-model knowledge sharing that everyone can understand; a must-read before LLM interviews in spring/autumn recruiting, so you can talk confidently with interviewers
katanaml/sparrow
Structured data extraction and instruction calling with ML, LLM and Vision LLM