kingabzpro/using-llama3-locally
Running Llama-3 locally using Ollama-Python, cURL, LangChain, Chroma, and a user interface.
This project helps software developers and AI practitioners experiment with and integrate the Llama-3 large language model directly on their own computers. It provides instructions and code examples for running Llama-3 locally, accessing its API, and building simple AI applications with it. The output is a functional local Llama-3 instance and example AI applications.
No commits in the last 6 months.
Use this if you are a developer looking to run, test, and build applications with Llama-3 on your local machine without relying on cloud services.
Not ideal if you are a non-technical user simply looking to chat with an AI without any development or setup.
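The repo's core workflow is sending prompts to a locally running Llama-3 instance through Ollama's HTTP API. As a minimal sketch (assuming Ollama is installed, serving on its default port 11434, and the `llama3` model has been pulled), the request can be built and sent with the standard library alone:

```python
import json
import urllib.request

# Ollama's default local endpoint for single-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt, model="llama3", stream=False):
    """Build the JSON body Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": stream}

def ask_llama(prompt):
    """Send a prompt to a local Llama-3 via Ollama and return its reply text.

    Requires a running `ollama serve` with the llama3 model pulled.
    """
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # With stream=False, Ollama returns one JSON object whose
        # "response" field holds the generated text.
        return json.loads(resp.read())["response"]
```

The same call can be made from the shell with `curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Hello", "stream": false}'`; the `ollama` Python package used in the repo wraps this endpoint in a friendlier client.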
Stars
59
Forks
11
Language
Jupyter Notebook
License
Apache-2.0
Category
Last pushed
May 15, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kingabzpro/using-llama3-locally"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related models
CogitoNTNU/course-on-large-language-models
This is a course on how to program with Large Language Models.
liux2/Langchain-LLM-Config
Langchain LLM config adapters
pmady/llmops
🚀 The Ultimate Curated List of LLMOps Tools, Frameworks, and Resources - A comprehensive...
nsourlos/LLM_evaluation_framework
Evaluate performance of LLM models for Q&A in any domain
MuhammadTahaNasir/llm-learning-hub
A hands-on collection of practical notebooks for learning and building with LLMs, including...