kennethleungty/Llama-2-Open-Source-LLM-CPU-Inference

Running Llama 2 and other Open-Source LLMs on CPU Inference Locally for Document Q&A

Overall score: 51 / 100 (Established)

This project helps teams who need to extract specific information from long documents, like annual reports or legal filings, by asking questions in plain language. You input your documents and your questions, and it provides direct answers. This is ideal for analysts, researchers, or anyone handling sensitive information that can't be shared with external AI services.
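The workflow above (documents in, plain-language questions in, direct answers out) follows the usual retrieve-then-read pattern. The toy sketch below is not the repository's actual pipeline, which embeds document chunks and feeds them to a quantized Llama 2 model; it only illustrates the retrieval step with crude keyword overlap, using nothing but the standard library.

```python
# Toy illustration of the retrieve-then-read idea behind local document Q&A.
# NOT the repository's pipeline (that uses embeddings + a quantized Llama 2);
# this just scores passages by word overlap with the question.
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase word counts, used as a crude bag-of-words representation."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(passages: list[str], question: str) -> str:
    """Return the passage sharing the most word overlap with the question."""
    q = tokenize(question)
    return max(passages, key=lambda p: sum((tokenize(p) & q).values()))

passages = [
    "Revenue for fiscal 2023 grew 12% to $4.1B, driven by cloud services.",
    "The board approved a $500M share buyback program in Q3.",
    "Headcount decreased by 5% following the restructuring plan.",
]
print(retrieve(passages, "How much did revenue grow in 2023?"))
# Prints the first passage (it shares "revenue" and "2023" with the question).
```

In the real project, this overlap score is replaced by vector similarity over embeddings, and the best-matching chunks are passed to the LLM as context for answer generation, all on local CPU.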

974 stars. No commits in the last 6 months.

Use this if you need to run a question-answering system on your own private documents without relying on external cloud-based AI services, especially due to data privacy or cost concerns.

Not ideal if you're comfortable using commercial AI services like OpenAI's GPT-4 or if you need to process extremely large volumes of data very quickly on high-end GPUs.

document-analysis private-data information-retrieval enterprise-search compliance
Stale (6 months) · No package published · No dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 25 / 25


Stars: 974
Forks: 207
Language: Python
License: MIT
Last pushed: Nov 06, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kennethleungty/Llama-2-Open-Source-LLM-CPU-Inference"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
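The same endpoint can be queried from Python with the standard library. Note that the response field names used below (`stars`, `scores`) are assumptions for illustration, since the API's JSON schema is not shown on this page; the example payload simply mirrors the figures above.

```python
# Sketch of calling the quality API from Python. Field names in the
# response ("stars", "scores") are ASSUMED for illustration; check the
# actual JSON the endpoint returns.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def summarize(payload: dict) -> str:
    """Condense an (assumed) response payload into a one-line summary."""
    total = sum(payload["scores"].values())
    return f"{payload['stars']} stars, quality {total}/100"

url = quality_url("kennethleungty", "Llama-2-Open-Source-LLM-CPU-Inference")
# with urllib.request.urlopen(url) as resp:   # live network call, not run here
#     payload = json.load(resp)
payload = {  # example payload mirroring the figures shown on this page
    "stars": 974,
    "scores": {"maintenance": 0, "adoption": 10, "maturity": 16, "community": 25},
}
print(summarize(payload))  # -> 974 stars, quality 51/100
```

The four sub-scores (0 + 10 + 16 + 25) sum to the 51/100 overall score shown at the top of the page.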