kddubey/cappr
Completion After Prompt Probability. Make your LLM make a choice
This project helps you get specific, structured answers from large language models (LLMs) by forcing them to choose from a predefined list of options. You provide the LLM with a prompt and a set of possible completions, and it returns the probability of each completion, or simply the one it 'prefers'. This is ideal for anyone looking to extract precise, categorized information from LLM outputs, such as market researchers, content moderators, or customer service analysts.
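The underlying idea — score each candidate completion by its probability given the prompt, then pick the highest — can be sketched without the library itself. The sketch below uses a toy log-probability function in place of a real LLM; the function names and toy values are illustrative, not cappr's API.

```python
import math

# Toy token log-probabilities mapping (context, token) -> log P(token | context).
# A real implementation would get these from an LLM; values here are made up.
TOY_LOGPROBS = {
    ("The sky is", "blue"): math.log(0.7),
    ("The sky is", "green"): math.log(0.05),
}

def toy_logprob(context, token):
    """Look up a toy log-probability; unknown pairs get a small default."""
    return TOY_LOGPROBS.get((context, token), math.log(0.01))

def completion_logprob(prompt, completion, logprob_fn):
    """Sum log-probabilities of the completion's tokens, conditioned on the prompt."""
    context = prompt
    total = 0.0
    for token in completion.split():
        total += logprob_fn(context, token)
        context = context + " " + token  # extend the context token by token
    return total

def pick_completion(prompt, completions, logprob_fn):
    """Return the completion with the highest probability after the prompt."""
    scores = {c: completion_logprob(prompt, c, logprob_fn) for c in completions}
    return max(scores, key=scores.get)

best = pick_completion("The sky is", ["blue", "green"], toy_logprob)
print(best)  # → blue
```

Because the choice is an argmax over a fixed set, the output is always one of the provided completions — the constraint that makes this approach suitable for classification-style tasks.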
Used by 1 other package. No commits in the last 6 months. Available on PyPI.
Use this if you need an LLM to consistently pick a single, correct answer or categorize information from a predefined set of choices, rather than generating freeform text.
Not ideal if your task requires the LLM to generate creative, open-ended responses without constraints on its output.
Stars
82
Forks
3
Language
Python
License
Apache-2.0
Category
Prompt engineering
Last pushed
Nov 02, 2024
Commits (30d)
0
Dependencies
2
Reverse dependents
1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/kddubey/cappr"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
linshenkx/prompt-optimizer
A prompt optimizer that helps you write high-quality prompts
Undertone0809/promptulate
🚀Lightweight large language model automation and autonomous language agent development...
CTLab-ITMO/CoolPrompt
Automatic Prompt Optimization Framework
microsoft/sammo
A library for prompt engineering and optimization (SAMMO = Structure-aware Multi-Objective...
Eladlev/AutoPrompt
A framework for prompt tuning using Intent-based Prompt Calibration