avilum/llama-saas
A client/server for LLaMA (Large Language Model Meta AI) that can run ANYWHERE.
This project provides a simple way to get real-time responses from the LLaMA large language model on almost any computer. You input a prompt or question, and the system generates a text response. It's designed for researchers, academics, and anyone in government or civil society who needs to experiment with large language models for non-commercial research.
No commits in the last 6 months.
Use this if you are an academic or research professional who needs to run the LLaMA model for non-commercial research and want a client/server setup that works on standard CPU machines.
Not ideal if you need to deploy large language models for commercial applications or if you require extensive customization beyond basic text generation.
Stars
61
Forks
2
Language
Go
License
Apache-2.0
Last pushed
Mar 25, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/avilum/llama-saas"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
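For programmatic use, here is a minimal Go sketch (Go being the repository's own language) of the same call the curl example makes. The endpoint URL is taken verbatim from the example above; the response schema is not documented here, so the body is printed raw rather than parsed into assumed fields.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// buildQualityURL assembles the API endpoint for a given owner/repo pair,
// mirroring the path used in the curl example above.
func buildQualityURL(owner, repo string) string {
	return fmt.Sprintf(
		"https://pt-edge.onrender.com/api/v1/quality/transformers/%s/%s",
		owner, repo)
}

func main() {
	url := buildQualityURL("avilum", "llama-saas")

	// Anonymous access: up to 100 requests/day, no API key required.
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println(string(body)) // raw response payload
}
```

With a free key (1,000 requests/day), the key would typically be attached as a header on the request; the exact header name is not specified here, so check the API's own documentation before wiring it in.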
Higher-rated alternatives
cel-ai/celai
Open source framework designed to accelerate the development of omnichannel AI virtual assistants.
sauravpanda/BrowserAI
Run local LLMs like llama, deepseek-distill, kokoro and more inside your browser
lone-cloud/gerbil
A desktop app for running Large Language Models locally.
vinjn/llm-metahuman
An open solution for AI-powered photorealistic digital humans.
cztomsik/ava
All-in-one desktop app for running LLMs locally.