gjbex/Deploying-LLMs-locally
Material for a training on AI tools
This project helps researchers and engineers who work with high-performance computing (HPC) systems understand how to run large language models (LLMs) on their local infrastructure. It provides a presentation, source code, and scripts for downloading models and data to get LLMs working in a local environment. The target user is someone in scientific research or engineering operations who needs to deploy AI tools without relying on external cloud services.
Use this if you are an HPC user or researcher looking to run large language models directly on your local machines or institutional hardware.
Not ideal if you prefer using cloud-based LLM services or are not familiar with managing software environments on local infrastructure.
Stars
18
Forks
6
Language
Jupyter Notebook
License
CC-BY-4.0
Category
transformers
Last pushed
Feb 03, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/gjbex/Deploying-LLMs-locally"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
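The curl command above can also be called programmatically. A minimal Python sketch using only the standard library (the JSON response schema is not documented on this page, so `fetch_quality` simply returns the decoded payload as-is):

```python
# Sketch: query the pt-edge quality API for a repository.
# The endpoint comes from the curl example above; the shape of the
# returned JSON is an assumption and is not parsed further here.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a repository's quality data."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Reproduces the URL from the curl example above.
print(quality_url("transformers", "gjbex", "Deploying-LLMs-locally"))
```

With a free API key (1,000 requests/day), you would presumably pass it as a header; since this page does not document the header name, that part is left out of the sketch.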
Higher-rated alternatives
PaddlePaddle/PaddleNLP
Easy-to-use and powerful LLM and SLM library with awesome model zoo.
meta-llama/llama-cookbook
Welcome to the Llama Cookbook! This is your go-to guide for building with Llama: Getting started...
arcee-ai/mergekit
Tools for merging pretrained large language models.
changyeyu/LLM-RL-Visualized
100+ LLM/RL Algorithm Maps: original diagrams visualizing large language model and reinforcement learning algorithms.
mindspore-lab/step_into_llm
MindSpore online courses: Step into LLM