Medical-RAG-Chatbot and End-to-End-Medical-Chatbot

These are competitors: both implement medical RAG chatbots with LangChain and LLM-based question answering, but they differ in vector store (FAISS vs. Pinecone) and LLM source (Mistral via HuggingFace vs. unspecified), making them alternative solutions for the same use case.

Medical-RAG-Chatbot — score 36 (Emerging)
  Maintenance 10/25 | Adoption 1/25 | Maturity 13/25 | Community 12/25
  Stars: 1 | Forks: 1 | Downloads: | Commits (30d): 0 | Language: Python | License: MIT
  No Package, No Dependents

End-to-End-Medical-Chatbot — score 25 (Experimental)
  Maintenance 10/25 | Adoption 4/25 | Maturity 11/25 | Community 0/25
  Stars: 8 | Forks: | Downloads: | Commits (30d): 0 | Language: | License: MIT
  No Package, No Dependents

About Medical-RAG-Chatbot

Ratnesh-181998/Medical-RAG-Chatbot

Medical RAG Question-Answering System built using LangChain, FAISS vector store, PyPDF, and Streamlit. Powered by Mistral open-source LLMs (HuggingFace) with custom context-aware chains. Includes a production-grade LLMOps/AIOps pipeline using Docker, Jenkins CI/CD, Aqua Trivy security scanning, and automated deployment on AWS App Runner.

About End-to-End-Medical-Chatbot

mdzaheerjk/End-to-End-Medical-Chatbot

Medical Chatbot using Retrieval-Augmented Generation (RAG) to answer medical queries. PDFs are converted into embeddings and stored in Pinecone. LangChain retrieves context for LLM responses. Built with Flask and deployable on AWS using Docker and GitHub Actions for scalable access.
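Neither repository's code is reproduced here, but the retrieval step both projects share can be sketched in plain Python. The toy bag-of-words "embedding" below stands in for a real embedding model, and the in-memory list stands in for FAISS or Pinecone; function names and the sample chunks are illustrative, not taken from either repo.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; the real projects use a neural
    # embedding model and store the vectors in FAISS or Pinecone.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank PDF-derived text chunks by similarity to the query, keep top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    # The retrieved context is prepended to the question before the
    # prompt is handed to the LLM (Mistral via HuggingFace in one repo,
    # unspecified in the other).
    context = "\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

chunks = [
    "Aspirin is a common analgesic and antiplatelet drug.",
    "Ibuprofen is a nonsteroidal anti-inflammatory drug.",
    "Penicillin is an antibiotic derived from mould.",
]
print(build_prompt("What kind of drug is aspirin?", chunks))
```

In both projects this retrieve-then-prompt loop is what LangChain's retrieval chains automate; the vector-store choice (FAISS in-process vs. Pinecone as a managed service) only changes where the `retrieve` step runs.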

This medical chatbot helps healthcare professionals and researchers quickly get answers to medical questions. You provide it with a collection of medical PDFs, and it uses those documents to generate accurate, context-aware responses to your queries. This is designed for anyone needing fast, reliable information retrieval from extensive medical literature.

Tags: medical-information, healthcare-research, clinical-support, pharmacology-references, literature-review

Scores updated daily from GitHub, PyPI, and npm data.