rgryta/LLM-WSL2-Docker
One-click install for WizardLM-13B-Uncensored with oobabooga webui
This project offers a simple way to run a large language model (LLM) such as WizardLM-13B-Uncensored on a Windows PC with an Nvidia GPU. It wraps the oobabooga text-generation web UI, giving you a browser interface for text generation and related tasks. It is aimed at Windows users who want to experiment with powerful, locally-run AI text models without a deep technical setup.
No commits in the last 6 months.
Use this if you have a Windows 11 Pro PC with an Nvidia GPU and want to easily set up and use a large language model for text generation.
Not ideal if you have an AMD GPU, an older Windows version, or if you prefer to work with LLMs directly through programming APIs rather than a web interface.
Stars: 21
Forks: 5
Language: PowerShell
License: MIT
Last pushed: Jun 27, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/rgryta/LLM-WSL2-Docker"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000 requests/day.
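If you prefer to call the endpoint from code rather than curl, the URL follows a simple `quality/<category>/<owner>/<repo>` pattern. Below is a minimal Python sketch that builds such a URL; the helper name `quality_endpoint` is our own, and the response format is not documented here, so treat the fetch step as an assumption to verify against the live API.

```python
import urllib.parse

# Base path taken from the curl example above.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def quality_endpoint(category: str, owner: str, repo: str) -> str:
    """Build the quality-data endpoint URL for a repository.

    Each path segment is percent-encoded so unusual repo names
    stay valid in the URL.
    """
    path = "/".join(
        urllib.parse.quote(part, safe="") for part in (category, owner, repo)
    )
    return f"{BASE_URL}/{path}"

# The endpoint for this repository:
url = quality_endpoint("llm-tools", "rgryta", "LLM-WSL2-Docker")
print(url)

# To actually fetch the data (requires network access; response shape
# is an assumption -- inspect it before relying on specific fields):
#
# import json, urllib.request
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```

This keeps the URL construction testable offline; only the commented-out fetch touches the network.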
Higher-rated alternatives
containers/ramalama
RamaLama is an open-source developer tool that simplifies the local serving of AI models from...
av/harbor
One command brings a complete pre-wired LLM stack with hundreds of services to explore.
RunanywhereAI/runanywhere-sdks
Production-ready toolkit for running AI locally
runpod-workers/worker-vllm
The RunPod worker template for serving our large language model endpoints. Powered by vLLM.
foldl/chatllm.cpp
Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU)