generative-computing/mellea

Mellea is a library for writing generative programs.

Quality score: 58 / 100 (Established)

Mellea is for developers building applications on Large Language Models (LLMs) who need reliable, predictable outputs. It replaces brittle prompt-and-agent calls with structured, testable workflows: developers supply text input, declare the expected output structure with Python type annotations, and receive validated outputs that are guaranteed to match (for example, a user's name as a string and age as an integer).
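To make the pattern concrete, here is a minimal sketch of type-annotation-driven output validation as described above. This is NOT mellea's actual API (the page does not show it); it is a generic illustration using a standard-library dataclass, with the name/age example taken from the description.

```python
from dataclasses import dataclass


# Illustrative only, not mellea's API: declare the expected output
# structure with Python type annotations, then validate a raw LLM
# result (parsed JSON) against it before using it.
@dataclass
class UserInfo:
    name: str  # the user's name
    age: int   # the user's age as an integer


def validate(raw: dict) -> UserInfo:
    """Check a raw LLM result against the annotated schema."""
    for field, typ in UserInfo.__annotations__.items():
        if field not in raw:
            raise ValueError(f"missing field: {field}")
        if not isinstance(raw[field], typ):
            raise TypeError(f"{field} must be {typ.__name__}")
    return UserInfo(**raw)


print(validate({"name": "Ada", "age": 36}))
```

A real generative program would call the model, parse its JSON reply, and pass the result through a validator like this so downstream code only ever sees well-typed values.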


Use this if you are a developer struggling with inconsistent or incorrect LLM outputs in your applications and want to build more robust, testable AI-powered features.

Not ideal if you are looking for a no-code solution or a general-purpose LLM wrapper for quick experimentation without strict output requirements.

Tags: AI application development, LLM integration, data extraction, reliable AI systems, generative AI workflows

No package published. No dependents.
Score breakdown:
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 15 / 25
Community: 23 / 25


Stars: 341
Forks: 87
Language: Python
License: Apache-2.0
Last pushed: Mar 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/generative-computing/mellea"

Open to everyone: 100 requests/day with no API key. Register for a free key to get 1,000 requests/day.
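The same request can be made from Python. The sketch below builds the endpoint URL shown in the curl command and fetches it with the standard library; the shape of the JSON response is not documented here, so the code only decodes it into a dict without assuming specific fields.

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (needs network access;
    response schema is an assumption, so callers should inspect keys)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


print(quality_url("generative-ai", "generative-computing", "mellea"))
```

Passing an API key (once you have one) would presumably go in a header or query parameter; the page does not say which, so that step is left out.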