jlonge4/local_llama

This repo showcases how to run a model locally and offline, with no OpenAI dependencies.

Quality score: 46 / 100 (Emerging)

This tool answers questions about your documents without an internet connection and without sharing them with external services. You load PDF, TXT, DOCX, or MD files, then ask questions and receive summaries or specific information drawn from their content. It is useful for anyone who needs to extract information from their own documents while keeping them private and offline.

298 stars. No commits in the last 6 months.

Use this if you need to privately chat with your documents and retrieve information without uploading them to cloud-based AI services or requiring an internet connection.

Not ideal if you're looking for a general-purpose AI chatbot that can answer questions beyond the scope of your uploaded files or if you prefer a fully managed cloud solution.

document-search information-retrieval data-privacy offline-analysis knowledge-management
Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 20 / 25


Stars: 298
Forks: 47
Language: Python
License: Apache-2.0
Last pushed: Jul 12, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/jlonge4/local_llama"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.