equinor/promptly
A prompt collection for testing and evaluation of LLMs.
This collection provides pre-written prompts for evaluating and testing Large Language Models (LLMs). It feeds models specific questions and scenarios so you can assess their performance and responses. Scientific programmers and researchers who work with AI models will find it useful for benchmarking and understanding LLM capabilities.
Use this if you need a structured set of prompts to systematically test and compare various Large Language Models.
Not ideal if you are looking for a tool to generate prompts automatically or for general-purpose prompt engineering outside of evaluation.
Stars: 27
Forks: 3
Language: Jupyter Notebook
License: CC-BY-4.0
Category:
Last pushed: Feb 24, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/equinor/promptly"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
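The endpoint above can also be called from a script. Below is a minimal sketch in Python using only the standard library; it assumes the endpoint returns JSON and that other repositories follow the same `/{category}/{owner}/{name}` path pattern shown in the curl example (both are assumptions, not documented here).

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def repo_url(category: str, owner: str, name: str) -> str:
    """Build the endpoint URL, assuming the path pattern from the curl example."""
    return f"{BASE}/{category}/{owner}/{name}"

def fetch_repo(category: str, owner: str, name: str) -> dict:
    """Fetch repo metadata anonymously (100 requests/day, no key needed).

    Assumes the endpoint returns a JSON object.
    """
    with urllib.request.urlopen(repo_url(category, owner, name)) as resp:
        return json.load(resp)

# The generated URL for equinor/promptly matches the curl command above:
print(repo_url("prompt-engineering", "equinor", "promptly"))
```

For the higher 1,000 requests/day tier, consult the API's own documentation for how to supply the free key; the mechanism is not described here.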
Higher-rated alternatives
dottxt-ai/outlines
Structured Outputs
takashiishida/arxiv-to-prompt
Transform arXiv papers into a single LaTeX source that can be used as a prompt for asking LLMs...
microsoft/promptpex
Test Generation for Prompts
Spr-Aachen/LLM-PromptMaster
A simple LLM-powered chatbot application.
AI-secure/aug-pe
[ICML 2024 Spotlight] Differentially Private Synthetic Data via Foundation Model APIs 2: Text