ai-dial and ai-dial-core
The core backend service and its documentation are **ecosystem siblings** — the documentation provides guidance for deploying and integrating the unified API that the core component exposes.
About ai-dial
epam/ai-dial
Documentation for AI DIAL
This project helps developers and DevOps engineers deploy and manage a unified platform for large language model (LLM) applications. It connects various LLM services (Azure OpenAI, AWS Bedrock, GCP Vertex AI, or self-hosted models) and provides a customizable, integrated chat application with administrative controls. The primary users are developers building LLM-powered applications and operations teams responsible for deploying and maintaining them.
About ai-dial-core
epam/ai-dial-core
The main component of AI DIAL, which provides a unified API to different chat completion and embedding models, assistants, and applications
DIAL Core acts as a central hub connecting various AI models, such as chat completion and embedding models, to different applications. It takes requests from your applications, routes them to the appropriate model backend, and delivers the model's response back to your application. This is ideal for developers or platform engineers building applications that need to interact with multiple AI models through a single interface.
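As a rough sketch of what "unified API" means in practice: DIAL Core exposes an OpenAI-compatible chat completion endpoint keyed by a deployment name, and routes each request to whichever model backend is registered under that name. The host URL, deployment name, and API key below are hypothetical placeholders, not values from this document.

```python
import json

# Hypothetical DIAL Core deployment URL (placeholder, not a real host).
DIAL_HOST = "https://dial-core.example.com"

def build_chat_request(deployment: str, user_message: str, api_key: str):
    """Build the URL, headers, and JSON body for a chat completion call.

    DIAL Core routes the request to whichever model backend (e.g. Azure
    OpenAI, AWS Bedrock, GCP Vertex AI, or a self-hosted model) is
    registered under the given deployment name, so application code stays
    the same regardless of which model serves it.
    """
    url = f"{DIAL_HOST}/openai/deployments/{deployment}/chat/completions"
    headers = {"Api-Key": api_key, "Content-Type": "application/json"}
    body = {"messages": [{"role": "user", "content": user_message}]}
    return url, headers, json.dumps(body)

# Example: the same call shape works for any registered deployment name.
url, headers, body = build_chat_request("gpt-4", "Hello!", "dummy-key")
```

Swapping models is then a matter of changing the deployment name (or its backend configuration in DIAL Core), not the application code.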