yvonwin/qwen2.cpp
qwen2 and llama3 cpp implementation
This project allows you to run powerful large language models (LLMs) like Qwen2 and Llama3 directly on your own computer, even without specialized cloud infrastructure. You provide a pre-trained model, and it gives you a local, interactive chatbot or an API server for integrating AI into your applications. It's designed for technical users who want to deploy and experiment with advanced language AI locally.
No commits in the last 6 months.
Use this if you need to run Qwen2 or Llama3 models on your local machine, whether for direct interaction or to power a local application, ensuring data privacy and reducing reliance on external services.
Not ideal if you're looking for a managed, cloud-based AI service or a ready-to-use application that doesn't require any technical setup.
Stars: 48
Forks: 4
Language: C++
License: —
Category:
Last pushed: Jun 07, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/yvonwin/qwen2.cpp"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
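For scripted access, the endpoint follows a simple `category/owner/repo` path pattern, taken from the curl example above. The sketch below builds that URL in Python; the response's JSON field names are not documented here, so fetching and parsing is left as a comment rather than assumed.

```python
# Minimal sketch: construct the pt-edge quality endpoint for a repo.
# The URL pattern is inferred from the curl example above; anything
# beyond the path shape (e.g. response fields) is an assumption.

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Return the API endpoint for a repo in a given category."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("llm-tools", "yvonwin", "qwen2.cpp")
# Fetch with e.g. urllib.request.urlopen(url); no key is needed
# for up to 100 requests/day, per the note above.
```

Keyless use is rate-limited to 100 requests/day, so cache responses client-side if you poll more than a handful of repos.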
Higher-rated alternatives
QwenLM/Qwen
The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
LLM-Red-Team/qwen-free-api
🚀...
QwenLM/Qwen-VL
The official repo of Qwen-VL (通义千问-VL) chat & pretrained large vision language model proposed by...
willbnu/Qwen-3.5-16G-Vram-Local
Configs, launchers, benchmarks, and tooling for running Qwen3.5 GGUF models locally with...
QwenLM/qwen.cpp
C++ implementation of Qwen-LM