linonetwo/MOSS-DockerFile
Runs Fudan's MOSS language model in Docker, with a web UI served by Gradio.
This project helps individuals run the Fudan MOSS large language model locally on their computer using Docker. You provide the MOSS model files, and it sets up a web interface through Gradio, allowing you to interact with the model directly in your browser. This is for researchers, developers, or enthusiasts who want to experiment with or use the MOSS model without complex setup.
No commits in the last 6 months.
Use this if you have a powerful computer with an NVIDIA GPU (e.g., an RTX 3090 Ti or better) and want to run the MOSS language model in a user-friendly web interface.
Not ideal if you don't have access to a high-end NVIDIA GPU with at least 14GB of VRAM, or if you prefer cloud-based LLM solutions.
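Since the repository provides only a Dockerfile and you supply the model files, a typical launch might look like the compose sketch below. This is a hypothetical illustration, not the project's actual configuration: the service name, image build context, and `./models` mount path are assumptions; only the Gradio default port (7860) and the NVIDIA GPU reservation syntax are standard.

```yaml
# Hypothetical sketch — service name, build context, and model mount
# path are assumptions, not taken from the repository.
services:
  moss:
    build: .                 # build the image from the repo's Dockerfile
    ports:
      - "7860:7860"          # Gradio's default web UI port
    volumes:
      - ./models:/models     # you provide the MOSS model files here
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]   # requires the NVIDIA Container Toolkit
```

With a file like this, `docker compose up` would build the image and expose the web interface on `http://localhost:7860`, assuming the host has the NVIDIA Container Toolkit installed.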
Stars: 16
Forks: 3
Language: Python
License: MIT
Category:
Last pushed: Dec 15, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/linonetwo/MOSS-DockerFile"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
containers/ramalama
RamaLama is an open-source developer tool that simplifies the local serving of AI models from...
av/harbor
One command brings a complete pre-wired LLM stack with hundreds of services to explore.
RunanywhereAI/runanywhere-sdks
Production ready toolkit to run AI locally
runpod-workers/worker-vllm
The RunPod worker template for serving our large language model endpoints. Powered by vLLM.
foldl/chatllm.cpp
Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU)