gjbex/Deploying-LLMs-locally

Material for a training on AI tools

Score: 48 / 100 (Emerging)

This project helps researchers and engineers who work with high-performance computing (HPC) systems understand how to run large language models (LLMs) on their local infrastructure. It provides a presentation, source code, and scripts for downloading models and data to get LLMs working in a local environment. The target user is someone in scientific research or engineering operations who needs to deploy AI tools without relying on external cloud services.

Use this if you are an HPC user or researcher looking to run large language models directly on your local machines or institutional hardware.

Not ideal if you prefer using cloud-based LLM services or are not familiar with managing software environments on local infrastructure.

Tags: HPC, scientific-computing, AI-deployment, machine-learning-operations, local-AI
No package · No dependents
Maintenance 10 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 16 / 25
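The four subscores above happen to add up to the overall score of 48/100. A minimal sketch of that arithmetic, assuming the overall score is the plain sum of the subscores (the scoring method is not documented here):

```python
# Subscores from the card above; assumption: overall score = sum of subscores.
subscores = {
    "Maintenance": 10,
    "Adoption": 6,
    "Maturity": 16,
    "Community": 16,
}

overall = sum(subscores.values())
print(overall)  # 48, matching the 48/100 overall score shown above
```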


Stars: 18
Forks: 6
Language: Jupyter Notebook
License: CC-BY-4.0
Last pushed: Feb 03, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/gjbex/Deploying-LLMs-locally"

Open to everyone: 100 requests per day with no API key; a free key raises the limit to 1,000 per day.
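The endpoint above follows a `registry/owner/repo` path pattern. A minimal sketch for building such URLs programmatically, assuming only the URL structure shown (the `quality_url` helper is hypothetical, not part of any published client):

```python
# Hypothetical helper: builds the quality-API URL from its path segments.
# Assumes the api/v1/quality/<registry>/<owner>/<repo> pattern shown above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, owner: str, repo: str) -> str:
    """Return the quality endpoint URL for a given repository."""
    return f"{BASE}/{registry}/{owner}/{repo}"

url = quality_url("transformers", "gjbex", "Deploying-LLMs-locally")
print(url)
```

The resulting string matches the `curl` example above; fetching it (e.g. with `urllib.request` or `curl`) returns the card's data, subject to the rate limits noted.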