romilandc/langchain-RAG

A RAG (retrieval-augmented generation) implementation built on LangChain, using Chroma as the vector database: load your PDFs, embed and store them in the database, then query them with an LLM at inference time.

Score: 20 / 100 (Experimental)

This tool helps you quickly get answers from your PDF documents using a large language model. You provide your own PDFs, and it processes them to allow you to ask questions and receive relevant answers generated by the AI. It's ideal for researchers, analysts, or anyone who needs to extract information efficiently from a collection of documents without manually reading through everything.
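The retrieve-then-answer flow described above can be sketched in a few lines. This is an illustrative toy, not the repo's actual code: the real project uses LangChain, Chroma, and a dense embedding model, whereas this sketch stands in word-overlap scoring for vector similarity, and every name below is made up for the example.

```python
import re

# Stand-in "embedding": a lowercase word set. The real pipeline produces
# dense vectors with an embedding model and stores them in Chroma.
def embed(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

# Jaccard overlap as a stand-in for cosine similarity over embeddings.
def similarity(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

# 1. "Ingest": the repo splits PDFs into chunks and embeds each one.
chunks = [
    "Chroma is an open-source vector database.",
    "LangChain chains LLM calls with retrieval steps.",
    "PDF files are split into chunks before embedding.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

# 2. "Retrieve": find the stored chunk most similar to the question.
def retrieve(question: str) -> str:
    q = embed(question)
    return max(index, key=lambda item: similarity(q, item[1]))[0]

# 3. The retrieved chunk would be passed to the LLM as context for the answer.
print(retrieve("What is Chroma?"))
```

In the actual project, step 3 is where the LLM sees the retrieved text alongside your question, which is why answers stay grounded in your own documents rather than the model's general knowledge.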

No commits in the last 6 months.

Use this if you have a collection of PDF documents and want to quickly find specific information or synthesize answers by querying them with an AI.

Not ideal if you need to perform complex data analysis on structured data within PDFs or require the AI to write entirely new content unrelated to your documents.

Tags: document-query, information-extraction, research-assistance, knowledge-retrieval, pdf-analysis
Flags: Stale (6m), No Package, No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 16 / 25
Community 0 / 25

How are scores calculated?

Stars: 8
Forks:
Language: Python
License: Apache-2.0
Category: local-rag-stacks
Last pushed: May 01, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/romilandc/langchain-RAG"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.