marklysze/LlamaIndex-RAG-WSL-CUDA

Examples of RAG using Llamaindex with local LLMs - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B

Score: 31 / 100 (Emerging)

This project lets you ask questions of, and get summaries from, your Word documents using a powerful local AI model on your Windows machine. It takes your Word documents as input and returns relevant answers or summaries, sometimes citing the document passages that informed the answer. It suits researchers, analysts, or anyone who needs to extract information quickly from large document collections without sending data to external AI services.

132 stars. No commits in the last 6 months.

Use this if you have a Windows machine with an Nvidia graphics card and want to use open-source large language models to query your local Word documents for information.

Not ideal if you don't have an Nvidia graphics card or prefer to use cloud-based AI services for document analysis.

Tags: document-analysis, information-retrieval, local-AI, research-assist, knowledge-extraction
Badges: No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 13 / 25
(The four subscores sum to the overall score of 31 / 100.)


Stars: 132
Forks: 14
Language: Jupyter Notebook
License: none
Last pushed: Feb 25, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/marklysze/LlamaIndex-RAG-WSL-CUDA"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
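For scripted access, the same endpoint can be called from Python's standard library. This is a minimal sketch: the URL pattern (`/api/v1/quality/<category>/<owner>/<repo>`) is an assumption inferred from the single curl example above, the `fetch_quality` helper is hypothetical, and the response schema is not documented here.

```python
import json
import urllib.request

# Base URL taken from the curl example above; the path segments
# (category/owner/repo) are an assumed pattern, not a documented API.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{BASE_URL}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report.

    Hypothetical helper: assumes the endpoint returns JSON; the
    actual response fields are not documented in this listing.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Reproduces the URL from the curl example above.
    print(quality_url("rag", "marklysze", "LlamaIndex-RAG-WSL-CUDA"))
```

Keeping the URL construction separate from the network call makes the pattern easy to verify against the curl example without issuing a request.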