kddubey/cappr

Completion After Prompt Probability. Make your LLM make a choice

Score: 41/100 (Emerging)

This project helps you get specific, structured answers from large language models (LLMs) by forcing them to choose from a predefined list of options. You provide the LLM with a prompt and a set of possible completions, and it returns the probability of each completion, or simply the one the model "prefers." This is ideal for anyone who needs to extract precise, categorized information from LLM outputs, such as market researchers, content moderators, or customer service analysts.
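The core idea behind "completion after prompt probability" can be sketched without the library: score each candidate completion by the token log-probabilities an LLM assigns to it after the prompt, length-normalize, and softmax over the candidates. The snippet below is a minimal illustration of that idea using made-up log-probabilities; the function name and inputs are hypothetical, not cappr's actual API.

```python
import math

def completion_probs(token_logprobs_per_completion):
    """Turn per-token log-probabilities (as an LLM would assign them,
    conditioned on the prompt) into one probability per completion."""
    # Average log-prob per completion (length normalization, so longer
    # completions aren't unfairly penalized).
    avg_logprobs = [sum(lps) / len(lps) for lps in token_logprobs_per_completion]
    # Softmax over completions to get a distribution that sums to 1.
    m = max(avg_logprobs)
    exps = [math.exp(lp - m) for lp in avg_logprobs]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical log-probs an LLM might assign to each completion's tokens
# after a prompt like "Is this review positive or negative? ...":
logprobs = {
    "positive": [-0.1, -0.2],
    "negative": [-2.0, -1.5],
}
probs = completion_probs(list(logprobs.values()))
best = max(zip(logprobs, probs), key=lambda kv: kv[1])[0]
```

In the library itself, the per-token log-probabilities come from the model (e.g. an OpenAI or Hugging Face backend) rather than being supplied by hand.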

Used by 1 other package. No commits in the last 6 months. Available on PyPI.

Use this if you need an LLM to consistently pick a single, correct answer or categorize information from a predefined set of choices, rather than generating freeform text.

Not ideal if your task requires the LLM to generate creative, open-ended responses without constraints on its output.

text-classification structured-extraction content-moderation survey-analysis AI-evaluation
Stale: 6 months
Maintenance: 0/25
Adoption: 10/25
Maturity: 25/25
Community: 6/25


Stars: 82
Forks: 3
Language: Python
License: Apache-2.0
Last pushed: Nov 02, 2024
Commits (30d): 0
Dependencies: 2
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/kddubey/cappr"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.