Azure-Samples/azure-ai-search-multimodal-sample
A sample app for the Multimodal Retrieval-Augmented Generation pattern running in Azure, using Azure AI Search for retrieval and Azure OpenAI large language models to power Q&A experiences.
This project helps you build custom AI assistants that answer questions by understanding both the text and the images in your documents. It ingests PDF documents containing text, diagrams, and other visuals, processes them, and lets your assistant give accurate, context-aware answers. It is aimed at anyone who needs to extract and reason over complex information spread across visual and textual content, such as researchers, analysts, or knowledge managers.
Use this if you need to build a specialized Q&A application that can find answers and insights from both written content and visual elements (like charts or diagrams) within your PDF documents.
Not ideal if your primary need is to extract structured data from tables or if you are looking for a production-ready application without further development.
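The retrieve-then-generate flow described above can be sketched as follows. This is a minimal illustration, not the sample's actual code: the chunk field names (`content`, `image_caption`) and the prompt wording are assumptions, and the real app's index schema and grounding logic may differ.

```python
# Sketch of the multimodal RAG grounding step: retrieved chunks (text plus
# captions generated from document images) are assembled into a prompt that
# constrains the LLM to answer only from cited sources.
# Field names "content" and "image_caption" are hypothetical.

def build_grounded_prompt(question: str, chunks: list[dict]) -> str:
    """Assemble a prompt that grounds the answer in retrieved chunks."""
    sources = "\n".join(
        f"[{i + 1}] {c.get('content', '')} (image: {c.get('image_caption', 'none')})"
        for i, c in enumerate(chunks)
    )
    return (
        "Answer using only the numbered sources below; cite them as [n].\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}"
    )

# Example with one retrieved chunk that pairs text with an image caption:
chunks = [
    {
        "content": "Revenue grew 12% in Q3.",
        "image_caption": "bar chart of quarterly revenue",
    },
]
print(build_grounded_prompt("How did revenue change in Q3?", chunks))
```

In the actual sample, the chunks would come from an Azure AI Search query and the prompt would be sent to an Azure OpenAI chat model.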
Stars: 62
Forks: 38
Language: Python
License: MIT
Category: (none listed)
Last pushed: Jan 23, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/Azure-Samples/azure-ai-search-multimodal-sample"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
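For script use, the same endpoint shown in the curl command can be called from Python. The URL is taken from this page; the structure of the JSON response is not documented here, so inspect it before relying on any field names.

```python
# Build the quality-API URL for a repository, mirroring the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    """Return the API URL for the given owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("Azure-Samples", "azure-ai-search-multimodal-sample")
print(url)

# To actually fetch the data (requires network access):
# import json, urllib.request
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```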
Related tools
nashtech-garage/ntg-agent
A sample Chatbot in C# using Microsoft Agent Framework
shuyu-labs/AntSK
An AI knowledge base/agent built with .Net 9, AntBlazor, Semantic Kernel, and Kernel Memory,...
Azure-Samples/contoso-real-estate
Intelligent enterprise-grade reference architecture for JavaScript, featuring OpenAI...
wisedev-code/MaIN.NET
NuGet package designed to make LLMs, RAG, and Agents first-class citizens in .NET
shuyu-labs/GraphRag.Net
A .NET implementation modeled on GraphRag, built with Semantic Kernel; integrates into projects out of the box via NuGet