nicolay-r/llm-prompt-checking
Toolset for checking differences in how semantic relation presence is recognised by (1) large language models 🤖 and (2) annotators / experts ✍️
No commits in the last 6 months.
Stars: —
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Oct 01, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/nicolay-r/llm-prompt-checking"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
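For programmatic access, here is a minimal Python sketch that fetches the same endpoint as the curl example above. It assumes the endpoint returns JSON; the exact response fields are not documented here, so the result is printed as-is.

import requests

# Same endpoint as the curl example above; assumed to return JSON.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "prompt-engineering/nicolay-r/llm-prompt-checking")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # fail loudly on errors, e.g. hitting the 100 requests/day limit
print(resp.json())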
Higher-rated alternatives
ExpertiseModel/MuTAP
MuTAP: a prompt-based learning technique to automatically generate test cases with Large Language Models
INPVLSA/probefish
A web-based LLM prompt and endpoint testing platform. Organize, version, test, and validate...
thabit-ai/thabit
Thabit is a platform for evaluating prompts on multiple LLMs to determine the best one for your data
alexandrughinea/lm-tiny-prompt-evaluation-framework
This project provides a tiny framework for testing different prompt versions with various AI...