cocktailpeanut/dalai
The simplest way to run LLaMA on your local machine
This project helps you run large language models, specifically LLaMA and Alpaca, directly on your personal computer. It manages the model files for you and provides an accessible web interface and APIs for interacting with the models. It is useful for anyone who wants to experiment with local AI chat models without relying on cloud services.
12,980 stars. No commits in the last 6 months.
Use this if you want to run LLaMA or Alpaca language models locally on your Mac, Windows, or Linux machine for personal use or development.
Not ideal if you need to run highly specialized or custom large language models that are not LLaMA or Alpaca, or if your computer has very limited memory and disk space.
Stars: 12,980
Forks: 1,352
Language: CSS
License: —
Category: —
Last pushed: Jun 18, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/cocktailpeanut/dalai"
Open to everyone: 100 requests/day with no key needed. A free API key raises the limit to 1,000 requests/day.
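The endpoint above can also be called from any HTTP client. A minimal Python sketch, assuming the response is JSON; the field names in the sample below (`stars`, `forks`, `commits_30d`) are illustrative guesses based on the stats shown on this page, not documented API fields:

```python
import json
from urllib.parse import quote

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(repo: str) -> str:
    """Build the quality-endpoint URL for an owner/name repo slug."""
    return f"{API_BASE}/{quote(repo, safe='/')}"

# Live fetch (network required; the free tier allows 100 requests/day):
#   import urllib.request
#   with urllib.request.urlopen(quality_url("cocktailpeanut/dalai")) as resp:
#       data = json.loads(resp.read())

# Offline illustration with a hypothetical response shape:
sample = json.loads('{"stars": 12980, "forks": 1352, "commits_30d": 0}')
print(quality_url("cocktailpeanut/dalai"))
print(sample["stars"])
```

With a free key, you would typically pass it as a query parameter or header; check the API's own docs for the exact mechanism, as it is not specified here.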
Higher-rated alternatives
ludwig-ai/ludwig
Low-code framework for building custom LLMs, neural networks, and other AI models
withcatai/node-llama-cpp
Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema...
mudler/LocalAI
🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and...
zhudotexe/kani
kani (カニ) is a highly hackable microframework for tool-calling language models. (NLP-OSS @ EMNLP 2023)
SciSharp/LLamaSharp
A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.