reasoning-machines/prompt-lib

A set of utilities for running few-shot prompting experiments on large language models.

Quality score: 42 / 100 (Emerging)

This tool helps researchers and AI practitioners efficiently test and evaluate various prompting strategies for large language models. You provide example inputs and desired outputs (a "few-shot prompt") and it processes these prompts in bulk through different language models. The output is a structured log of the model's responses to your prompts, facilitating systematic comparison and analysis of performance.
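The few-shot workflow described above, where example inputs and desired outputs are combined with a new query, can be sketched in plain Python. This is a generic illustration of the technique, not prompt-lib's actual API; the function name and prompt layout are assumptions:

```python
def build_few_shot_prompt(examples, query, instruction=""):
    """Assemble a few-shot prompt: an optional instruction, then
    example input/output pairs, then the new query left open for
    the model to complete. (Illustrative only; not prompt-lib's API.)"""
    parts = [instruction] if instruction else []
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    # Leave the final output blank so the model fills it in.
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

examples = [
    ("2 + 2", "4"),
    ("3 * 5", "15"),
]
prompt = build_few_shot_prompt(
    examples, "7 - 4", instruction="Solve the arithmetic problem."
)
print(prompt)
```

Running many such prompts through several models and logging each response is the bulk-evaluation loop this tool automates.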

126 stars. No commits in the last 6 months.

Use this if you need to run large-scale experiments comparing how different prompts or language models perform on specific tasks, such as academic research or model benchmarking.

Not ideal if your goal is to build a production application that uses language models or if you only need to query a language model a few times; for those cases, other libraries might be more suitable.

Tags: AI Research, Prompt Engineering, Model Benchmarking, Natural Language Processing, Experimentation
Badges: Stale (6 months), No Package, No Dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 16 / 25


Stars: 126
Forks: 18
Language: Python
License: MIT
Last pushed: Oct 25, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/reasoning-machines/prompt-lib"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
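The same endpoint shown in the curl command can be queried from Python. The URL structure comes from the example above; the shape of the JSON response is not documented here, so treat the decoded fields as assumptions:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the quality-report URL used in the curl example above."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo, timeout=10):
    """Fetch and decode the JSON quality report. The response schema
    is undocumented in this listing, so downstream field names are
    assumptions to verify against a real response."""
    with urllib.request.urlopen(
        quality_url(category, owner, repo), timeout=timeout
    ) as resp:
        return json.load(resp)

url = quality_url("prompt-engineering", "reasoning-machines", "prompt-lib")
print(url)
```

With no API key this stays within the 100-requests/day limit; a free key raises that to 1,000/day.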