PromptPEx and Promptly

These are complementary tools: PromptPEx provides a framework for *generating* systematic test cases for prompts, while Promptly offers a curated *collection* of pre-made prompts to evaluate. One automates test creation; the other supplies evaluation material.

promptpex — Score: 51 (Established)
  Maintenance: 10/25 · Adoption: 10/25 · Maturity: 16/25 · Community: 15/25
  Stars: 158 · Forks: 19 · Downloads: — · Commits (30d): 0
  Language: TeX · License: CC-BY-4.0
  No package published; no dependents.

promptly — Score: 43 (Emerging)
  Maintenance: 10/25 · Adoption: 7/25 · Maturity: 16/25 · Community: 10/25
  Stars: 27 · Forks: 3 · Downloads: — · Commits (30d): 0
  Language: Jupyter Notebook · License: CC-BY-4.0
  No package published; no dependents.

About promptpex

microsoft/promptpex

Test Generation for Prompts

PromptPEx helps AI developers ensure their model prompts consistently produce the desired output. It takes a natural-language prompt and its specified rules (such as "output should be JSON"), then automatically generates unit tests that check whether different AI models follow those rules. Developers can use these tests to compare how well various models perform against the same prompt and rules.

AI development · Prompt engineering · AI model testing · LLM evaluation · Software quality assurance
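The rule-to-test idea described above can be sketched in a few lines. This is an illustrative mock, not PromptPEx's actual API: the helper names (`make_rule_test`, `is_json`) and the inlined model outputs are assumptions for the example; a real run would call live models.

```python
import json

# Hypothetical sketch: turn a prompt rule ("output should be JSON")
# into a reusable unit test, then apply that same test to outputs
# from different models. All names here are illustrative only.

def make_rule_test(rule_name, check):
    """Wrap a predicate into a named test over a model's raw output."""
    def test(output):
        return {"rule": rule_name, "passed": check(output)}
    return test

def is_json(output):
    """Predicate for the rule: the output must parse as JSON."""
    try:
        json.loads(output)
        return True
    except (ValueError, TypeError):
        return False

json_rule = make_rule_test("output should be JSON", is_json)

# Mocked outputs standing in for responses from two models under test.
outputs = {
    "model-a": '{"answer": 42}',
    "model-b": "The answer is 42.",
}

# Run the same generated test against every model's output.
results = {model: json_rule(out) for model, out in outputs.items()}
# results["model-a"]["passed"] → True; results["model-b"]["passed"] → False
```

Because each rule compiles to an ordinary predicate, the same test suite can be replayed against any number of models, which is the comparison workflow the description above outlines.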

About promptly

equinor/promptly

A prompt collection for testing and evaluation of LLMs.

This collection provides pre-written prompts for evaluating and testing large language models (LLMs). It helps you put different LLMs through their paces, feeding them specific questions and scenarios to assess their responses. Scientific programmers and researchers who work with AI models will find it useful for benchmarking and understanding LLM capabilities.

LLM-evaluation · AI-testing · prompt-benchmarking · AI-research · natural-language-processing
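A benchmarking loop over a prompt collection like this one might look as follows. This is a minimal sketch under stated assumptions: the prompt entries, the keyword-based scoring rule, and the mocked responses are all illustrative stand-ins, not Promptly's actual data format or tooling.

```python
# Hypothetical sketch: run the same prompt collection against several
# models and score each one. Each entry pairs a prompt with a simple
# expected keyword; real evaluations would use richer checks.

prompts = [
    {"prompt": "What is the capital of Norway?", "expect": "oslo"},
    {"prompt": "Name a noble gas.", "expect": "helium"},
]

def score(model_responses, cases):
    """Fraction of responses containing the expected keyword."""
    hits = sum(
        1
        for resp, case in zip(model_responses, cases)
        if case["expect"] in resp.lower()
    )
    return hits / len(cases)

# Mocked responses standing in for two models under test.
responses_a = ["The capital of Norway is Oslo.", "Helium is a noble gas."]
responses_b = ["Stockholm.", "Helium."]

print(score(responses_a, prompts))  # 1.0
print(score(responses_b, prompts))  # 0.5
```

Keeping the prompt set fixed while swapping models is what makes a shared collection useful for benchmarking: every model answers exactly the same questions under the same scoring rule.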

Scores updated daily from GitHub, PyPI, and npm data.