Azure-Samples/azure-ai-search-multimodal-sample

A sample app for the Multimodal Retrieval-Augmented Generation pattern running in Azure, using Azure AI Search for retrieval and Azure OpenAI large language models to power Q&A experiences.

Quality score: 54 / 100 (Established)

This project helps you build custom AI assistants that can answer questions by understanding information from both text and images in your documents. It takes PDF documents containing text, diagrams, and other visuals, processes them, and allows your custom assistant to provide accurate, context-aware answers. This tool is for anyone who needs to quickly extract and reason over complex information from a mix of visual and textual content, like researchers, analysts, or knowledge managers.

Use this if you need to build a specialized Q&A application that can find answers and insights from both written content and visual elements (like charts or diagrams) within your PDF documents.

Not ideal if your primary need is to extract structured data from tables or if you are looking for a production-ready application without further development.

Topics: knowledge-management, document-analysis, information-retrieval, business-intelligence, research-assistants
No package published; no dependents
Maintenance: 10 / 25
Adoption: 8 / 25
Maturity: 15 / 25
Community: 21 / 25


Stars: 62
Forks: 38
Language: Python
License: MIT
Category: dotnet-azure-rag
Last pushed: Jan 23, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/Azure-Samples/azure-ai-search-multimodal-sample"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
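For programmatic access, the same request can be built in Python. The endpoint and path shape are copied from the curl example above; the helper function name and its parameters are illustrative, a minimal sketch rather than an official client.

```python
import urllib.request

# Base endpoint taken from the curl example; the path layout
# (category/owner/repo) is assumed from that single sample URL.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository (hypothetical helper)."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("rag", "Azure-Samples", "azure-ai-search-multimodal-sample")
print(url)

# To actually fetch the JSON (requires network access; 100 requests/day
# without a key, per the note above):
# with urllib.request.urlopen(url) as resp:
#     payload = resp.read().decode("utf-8")
```

The fetch itself is left commented out so the snippet runs offline; swap in any HTTP client you prefer.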