yvonwin/qwen2.cpp

A C++ implementation of Qwen2 and Llama3

Quality score: 33/100 (Emerging)

This project allows you to run powerful large language models (LLMs) like Qwen2 and Llama3 directly on your own computer, even without specialized cloud infrastructure. You provide a pre-trained model, and it gives you a local, interactive chatbot or an API server for integrating AI into your applications. It's designed for technical users who want to deploy and experiment with advanced language AI locally.

No commits in the last 6 months.

Use this if you need to run Qwen2 or Llama3 models on your local machine, whether for direct interaction or to power a local application. Running locally keeps your data private and reduces reliance on external services.

Not ideal if you're looking for a managed, cloud-based AI service or a ready-to-use application that doesn't require any technical setup.

local-AI-deployment large-language-models AI-model-inference private-AI-chatbots
Status: Stale (6 months) · No package · No dependents
Maintenance: 0/25
Adoption: 8/25
Maturity: 16/25
Community: 9/25


Stars: 48
Forks: 4
Language: C++
License: (not listed)
Last pushed: Jun 07, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/yvonwin/qwen2.cpp"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
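The curl command above can also be wrapped in a small script. This is a minimal sketch using only the Python standard library; the host and path come from the curl example, but the response's JSON schema is not documented here, so no field names are assumed.

```python
# Minimal client sketch for the pt-edge quality endpoint shown above.
# Only the URL structure is taken from the page; the JSON response
# schema is undocumented, so the result is returned as a raw dict.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(collection: str, repo: str) -> str:
    """Build the quality-API URL for a repo within a collection."""
    return f"{API_BASE}/{collection}/{repo}"


def fetch_quality(collection: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (requires network access)."""
    with urllib.request.urlopen(build_url(collection, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Reproduces the URL from the curl example on this page.
    print(build_url("llm-tools", "yvonwin/qwen2.cpp"))
```

Under the anonymous tier this stays within 100 requests/day; with a free key (sent however the API expects, which is not specified here) the limit is 1,000/day.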