FareedKhan-dev/rag-with-rl

Maximizing the Performance of a Simple RAG using RL

Score: 45 / 100 (Emerging)

This project is for anyone building applications that use Large Language Models (LLMs) to answer questions over a set of provided documents. Given your documents and questions, instead of relying on plain retrieval it uses a reinforcement learning (RL) approach to select the most relevant document chunks more effectively. This yields more accurate answers from the LLM, making it valuable for anyone developing AI-powered information retrieval or Q&A systems.
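The chunk-selection idea can be sketched as an epsilon-greedy bandit over retrieved chunks. This is a minimal illustration of the general RL technique, not the repository's actual implementation; all names here are hypothetical:

```python
import random

class ChunkSelector:
    """Epsilon-greedy selection over candidate document chunks.

    Hypothetical sketch of RL-style chunk selection: each chunk keeps a
    running average reward (e.g. answer-quality feedback), and selection
    mostly exploits the best-scoring chunks while occasionally exploring.
    """

    def __init__(self, chunk_ids, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.value = {cid: 0.0 for cid in chunk_ids}  # running mean reward
        self.count = {cid: 0 for cid in chunk_ids}    # times each chunk was rewarded

    def select(self, k=2):
        """Pick k chunks: explore with probability epsilon, else exploit."""
        if self.rng.random() < self.epsilon:
            return self.rng.sample(list(self.value), k)
        ranked = sorted(self.value, key=self.value.get, reverse=True)
        return ranked[:k]

    def update(self, chunk_id, reward):
        """Incorporate feedback, e.g. 1.0 if the final answer was judged correct."""
        self.count[chunk_id] += 1
        n = self.count[chunk_id]
        self.value[chunk_id] += (reward - self.value[chunk_id]) / n

selector = ChunkSelector(["c1", "c2", "c3"])
selector.update("c2", 1.0)   # feedback: chunk c2 produced a good answer
print(selector.select(k=1))  # mostly exploits the highest-value chunk
```

The reward signal here is left abstract; in a RAG pipeline it would typically come from scoring the LLM's answer against a reference or a judge model.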

No commits in the last 6 months.

Use this if you are developing an AI system that answers user questions based on a specific set of documents and you are finding that your LLM sometimes provides inaccurate answers due to insufficient or irrelevant context.

Not ideal if you are looking for a complete, production-ready RAG application or if your primary goal is general-purpose LLM fine-tuning without a focus on document-based Q&A.

Tags: AI-powered Q&A, information retrieval, LLM application development, knowledge base search, contextual AI
Flags: Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 20 / 25


Stars: 90
Forks: 24
Language: Jupyter Notebook
License: MIT
Last pushed: Mar 20, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/FareedKhan-dev/rag-with-rl"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
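The same endpoint can also be queried from Python with the standard library. This is a sketch assuming the URL pattern shown in the curl example; the response schema is not documented here, so it is parsed as generic JSON:

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner, repo):
    """Build the quality-score endpoint URL for a given GitHub repository."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner, repo):
    """Fetch the quality report as a plain dict (schema undocumented here)."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

url = quality_url("FareedKhan-dev", "rag-with-rl")
print(url)  # same URL as the curl example above
```

Within the free tier's 100 requests/day, `fetch_quality` can be called directly; for higher volume, the documented free key raises the limit to 1,000/day.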