microsoft/promptpex
Test Generation for Prompts
PromptPex helps AI developers ensure their prompts consistently produce the desired output. Given a natural language prompt and its stated output rules (such as "output should be JSON"), it automatically generates unit tests that check whether different AI models follow those rules. Developers can use this to compare how well various models perform against the same prompt and rules.
Use this if you are a developer building AI applications and need to systematically test and compare how reliably different large language models (LLMs) adhere to the output requirements specified in your prompts.
Not ideal if you are a non-developer user looking for a no-code tool to create or improve prompts, as this is a technical testing utility.
Stars
158
Forks
19
Language
TeX
License
CC-BY-4.0
Category
Prompt Engineering
Last pushed
Mar 12, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/microsoft/promptpex"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
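The same endpoint can be called from code. A minimal Python sketch using only the standard library is below; the URL path segments come from the curl command above, but the shape of the JSON response is an assumption, so the fetch helper simply decodes whatever JSON the API returns.

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(category: str, owner: str, name: str) -> str:
    """Build the API URL for one repository's quality record."""
    return f"{BASE}/{category}/{owner}/{name}"


def fetch_quality(category: str, owner: str, name: str) -> dict:
    """Fetch and decode one record (100 requests/day without a key).

    Assumes the endpoint returns a JSON object; the exact fields are
    not documented here.
    """
    with urlopen(build_url(category, owner, name)) as resp:
        return json.load(resp)


# e.g. fetch_quality("prompt-engineering", "microsoft", "promptpex")
```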
Related tools
dottxt-ai/outlines
Structured Outputs
takashiishida/arxiv-to-prompt
Transform arXiv papers into a single LaTeX source that can be used as a prompt for asking LLMs...
Spr-Aachen/LLM-PromptMaster
A simple LLM-Powered chatbot software.
AI-secure/aug-pe
[ICML 2024 Spotlight] Differentially Private Synthetic Data via Foundation Model APIs 2: Text
equinor/promptly
A prompt collection for testing and evaluation of LLMs.