microsoft/promptpex

Test Generation for Prompts

Quality score: 51/100 (Established)

PromptPex helps AI developers ensure their model prompts consistently produce the desired output. It takes a natural-language prompt and its specified rules (for example, "output should be JSON"), then automatically generates unit tests that check whether different AI models follow those rules. Developers can use it to compare how well various models perform against the same prompt and rules.


Use this if you are a developer building AI applications and need to systematically test and compare how reliably different large language models (LLMs) adhere to the output requirements specified in your prompts.

Not ideal if you are a non-developer user looking for a no-code tool to create or improve prompts, as this is a technical testing utility.

Tags: AI development, Prompt engineering, AI model testing, LLM evaluation, Software quality assurance
No package · No dependents
Maintenance: 10/25
Adoption: 10/25
Maturity: 16/25
Community: 15/25


Stars: 158
Forks: 19
Language: TeX
License: CC-BY-4.0
Last pushed: Mar 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/microsoft/promptpex"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
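The response format is not documented on this page; assuming the endpoint returns JSON, a quick way to inspect it from the command line is to pretty-print the body with the Python standard library's `json.tool` module (this is a sketch, not an official client):

```shell
# Fetch the quality data and pretty-print the JSON body.
# Assumes the endpoint returns JSON; the response schema is not documented here.
curl -s "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/microsoft/promptpex" \
  | python -m json.tool
```

Given the 100 requests/day unauthenticated limit, it may be worth saving the response to a local file (`curl -s ... -o promptpex.json`) and inspecting that copy rather than re-fetching.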