generative-computing/mellea
Mellea is a library for writing generative programs.
Mellea is for developers building applications on Large Language Models (LLMs) who need reliable, predictable outputs. It replaces brittle prompts and ad-hoc agent calls with structured, testable workflows: developers provide text input, define the expected output structure with Python type annotations, and receive validated results with guaranteed fields (e.g., a user's name as a string and their age as an integer).
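A minimal sketch of that pattern in plain Python, using a dataclass for the annotated schema and a validation step over a parsed LLM response. This is an illustration of the idea only; `UserProfile` and `validate_output` are hypothetical names, and Mellea's actual API may differ.

```python
from dataclasses import dataclass

# Hypothetical schema: the structure we expect the LLM to produce.
@dataclass
class UserProfile:
    name: str
    age: int

def validate_output(raw: dict) -> UserProfile:
    """Coerce a parsed LLM response into the typed schema, or fail loudly."""
    profile = UserProfile(name=str(raw["name"]), age=int(raw["age"]))
    if profile.age < 0:
        raise ValueError("age must be non-negative")
    return profile

# Simulated LLM output (e.g., JSON parsed from the model's reply).
result = validate_output({"name": "Ada", "age": "36"})
print(result)  # age is coerced to an int, so downstream code can rely on it
```

The point is that downstream code consumes a typed object rather than raw model text, so malformed responses fail at the validation boundary instead of deep inside the application.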
Use this if you are a developer struggling with inconsistent or incorrect outputs from LLMs in your applications and want to build more robust, testable AI-powered features.
Not ideal if you are looking for a no-code solution or a general-purpose LLM wrapper for quick experimentation without strict output requirements.
Stars
341
Forks
87
Language
Python
License
Apache-2.0
Category
Last pushed
Mar 12, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/generative-computing/mellea"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
openvinotoolkit/model_server
A scalable inference server for models optimized with OpenVINO™
madroidmaq/mlx-omni-server
MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically...
NVIDIA-NeMo/Guardrails
NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based...
rhesis-ai/rhesis
Open-source platform & SDK for testing LLM and agentic apps. Define expected behavior, generate...
taco-group/OpenEMMA
OpenEMMA, a permissively licensed open source "reproduction" of Waymo’s EMMA model.