di37/LLM-Load-Unload-Ollama
This is a simple demonstration of how to keep an LLM loaded in memory for a prolonged period, or unload it immediately after inference, when using it via Ollama.
When working with large language models (LLMs) through Ollama, this project helps you manage how they use your computer's memory. It demonstrates how to keep an LLM actively loaded for continuous use or unload it immediately after getting a response. This is useful for anyone running LLMs locally who needs to optimize memory usage.
No commits in the last 6 months.
Use this if you are running LLMs via Ollama and need to control whether the model stays in memory for quick subsequent queries or unloads to free up resources.
Less useful if you don't run models through Ollama, or if memory usage during local LLM inference isn't a concern for you.
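The mechanism this repo demonstrates is Ollama's `keep_alive` request parameter: `0` unloads the model immediately after the response, `-1` keeps it resident indefinitely, and a duration string such as `"5m"` keeps it loaded for that long. A minimal sketch of both modes, assuming a local Ollama server at the default port and an illustrative model name (`llama3`) and helper (`build_request`) that are not from the repo:

```python
OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint

def build_request(prompt: str, model: str = "llama3", keep_alive=0) -> dict:
    """Build a payload for Ollama's /api/generate endpoint.

    keep_alive controls how long the model stays in memory after the
    response: 0 unloads it immediately, -1 keeps it loaded indefinitely,
    and a duration string like "5m" keeps it for that long.
    """
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "keep_alive": keep_alive,
    }

# Unload the model right after the response (frees RAM/VRAM):
unload_payload = build_request("Why is the sky blue?", keep_alive=0)

# Keep the model resident for fast follow-up queries:
resident_payload = build_request("Why is the sky blue?", keep_alive=-1)

# Sending the request would look like this (requires a running Ollama server):
# import requests
# r = requests.post(OLLAMA_URL, json=unload_payload, timeout=120)
# print(r.json()["response"])
```

The same `keep_alive` values work on `/api/chat`, so the trade-off is the one described above: immediate unloading frees resources at the cost of a cold-start reload on the next query.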
Stars: 13
Forks: 1
Language: Jupyter Notebook
License: —
Category:
Last pushed: May 04, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/di37/LLM-Load-Unload-Ollama"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
Higher-rated alternatives
NX-AI/xlstm
Official repository of the xLSTM.
sinanuozdemir/oreilly-hands-on-gpt-llm
Mastering the Art of Scalable and Efficient AI Model Deployment
DashyDashOrg/pandas-llm
Pandas-LLM
wxhcore/bumblecore
An LLM training framework built from the ground up, featuring a custom BumbleBee architecture...
MiniMax-AI/MiniMax-01
The official repo of MiniMax-Text-01 and MiniMax-VL-01, large-language-model &...