T-Sunm/rag-ops

This project applies the core knowledge from the LLMOps module, covering the design and implementation of an API Layer, Inference Layer, Observability Layer, Cache Layer, Guardrails Layer, Routing Layer, and Data Ingestion Pipeline.
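The layered flow described above can be sketched as a single request path: guardrails validate the input, the cache short-circuits repeated queries, the router picks a backend, and the inference layer produces the answer. This is a minimal illustrative sketch; the class and method names are assumptions, not the repo's actual API, and inference is stubbed out.

```python
# Hypothetical sketch of the layered request flow; names are
# illustrative and do not come from the T-Sunm/rag-ops codebase.
from dataclasses import dataclass, field


@dataclass
class RagOpsPipeline:
    cache: dict = field(default_factory=dict)

    def guardrails(self, query: str) -> str:
        # Guardrails Layer: reject empty input before it reaches a model
        if not query.strip():
            raise ValueError("empty query")
        return query.strip()

    def route(self, query: str) -> str:
        # Routing Layer: pick a backend with a simple length heuristic
        return "small-model" if len(query) < 50 else "large-model"

    def infer(self, query: str, backend: str) -> str:
        # Inference Layer (stubbed): a real system would call an LLM here
        return f"[{backend}] answer to: {query}"

    def answer(self, query: str) -> str:
        query = self.guardrails(query)
        if query in self.cache:  # Cache Layer: reuse prior answers
            return self.cache[query]
        result = self.infer(query, self.route(query))
        self.cache[query] = result
        return result
```

Observability would typically wrap each of these steps with logging and metrics; it is omitted here to keep the sketch short.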

Quality score: 41 / 100 (Emerging)

This project helps developers and MLOps engineers build robust and scalable chatbot systems that can answer questions using specific documents. It takes raw documents, processes them into a format a chatbot can understand, and then provides a secure, efficient, and monitored system for the chatbot to generate accurate responses. The end user is typically an MLOps engineer or a developer creating RAG-based applications for production environments.
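The "processes them into a format a chatbot can understand" step usually starts with splitting raw documents into overlapping chunks for retrieval. A minimal sketch of that chunking step is below; the function name and parameters are assumptions for illustration, not the repo's actual ingestion API.

```python
# Illustrative chunking step for a data ingestion pipeline;
# parameters and naming are hypothetical, not from the repo.
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks that overlap by `overlap` chars."""
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk:
            chunks.append(chunk)
        if start + size >= len(text):
            break
    return chunks
```

Each chunk would then be embedded and stored in a vector index so the chatbot can retrieve relevant passages at query time.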

Use this if you are an MLOps engineer or developer looking to build, understand, or deploy a production-ready RAG chatbot system that handles document ingestion, optimized inference, and safety features.

Not ideal if you are a business user looking for a pre-built chatbot solution without needing to manage the underlying infrastructure or development.

Tags: MLOps, chatbot-development, information-retrieval, LLM-deployment, AI-infrastructure
No License | No Package | No Dependents
Maintenance: 6 / 25
Adoption: 9 / 25
Maturity: 7 / 25
Community: 19 / 25


Stars: 70
Forks: 17
Language: Python
License: none
Last pushed: Dec 27, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/T-Sunm/rag-ops"
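The same endpoint can be queried from Python with the standard library. The URL pattern below is taken directly from the curl command above; the shape of the JSON response is not documented here, so this sketch only fetches and decodes it without assuming any fields.

```python
# Fetch a repo's quality data from the API shown above.
# Only the URL pattern is known from the curl example; the
# response schema is an unknown and is returned as-is.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given owner/repo."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality record (requires network access)."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)
```

For example, `quality_url("T-Sunm", "rag-ops")` reproduces the URL in the curl command.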

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.