maximhq/maxim-cookbooks
Maxim is an end-to-end AI evaluation and observability platform that helps modern AI teams ship agents with quality, reliability, and speed.
This project provides practical examples and code snippets for integrating Maxim into your AI agent development workflow: tracking agent behavior, managing prompts, simulating scenarios, and running automated tests. AI engineers and MLOps teams can use these examples to verify that their agents are reliable and behave as expected.
Use this if you are an AI engineer or MLOps specialist looking for ready-to-run code samples and configurations to integrate AI agent observability, auto-evaluation, prompt management, simulation, and test runs into your existing AI projects.
Not ideal if you are looking for a standalone AI agent framework or a general-purpose AI development library, as this focuses specifically on integrating with the Maxim evaluation platform.
Stars: 13
Forks: 8
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Mar 11, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/maximhq/maxim-cookbooks"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
- openvinotoolkit/model_server: A scalable inference server for models optimized with OpenVINO™
- madroidmaq/mlx-omni-server: MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically...
- NVIDIA-NeMo/Guardrails: NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based...
- generative-computing/mellea: Mellea is a library for writing generative programs.
- rhesis-ai/rhesis: Open-source platform & SDK for testing LLM and agentic apps. Define expected behavior, generate...