nuance1979/llama-server
LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI.
This project runs a local chatbot interface for LLaMA models: you supply the model files, and it serves a user-friendly chat UI in your web browser. It's ideal for researchers, developers, or anyone who wants to experiment with large language models privately.
134 stars. No commits in the last 6 months. Available on PyPI.
Use this if you want to set up and interact with local LLaMA language models through a clean, web-based chat interface without relying on external cloud services.
Not ideal if you're looking for a simple, plug-and-play chatbot experience without needing to manage local model files and server setup.
Stars: 134
Forks: 14
Language: Python
License: MIT
Category:
Last pushed: Jun 10, 2023
Commits (30d): 0
Dependencies: 5
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/nuance1979/llama-server"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
mishushakov/llm-scraper
Turn any webpage into structured data using LLMs
Mobile-Artificial-Intelligence/maid
Maid is a free and open source application for interfacing with llama.cpp models locally, and...
run-llama/LlamaIndexTS
Data framework for your LLM applications, focused on server-side solutions
JHubi1/ollama-app
A modern and easy-to-use client for Ollama
serge-chat/serge
A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy to use API.