MantisAI/prompt_engineering

Code that accompanies the PyData New York (2022) talk "Addressing the sensitivity of Large Language Models".

Score: 13 / 100 (Experimental)

This project helps you understand how different prompt formulations affect the answers you get from Large Language Models on various natural language processing tasks. You provide specific prompts and receive an evaluation of how well the model performed with each. It is aimed at anyone who uses Large Language Models for tasks such as text summarization, question answering, or content generation and wants to optimize their prompts for better results.

No commits in the last 6 months.

Use this if you are working with Large Language Models and need to systematically test and compare how different prompts influence their output quality for various NLP challenges.

Not ideal if you don't have access to Mantis AWS or are looking for a simple, out-of-the-box prompting solution without detailed evaluation.
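
To make the comparison workflow concrete, here is a minimal sketch of systematic prompt evaluation. It does not use this repository's code: call_model() is a hypothetical stub standing in for any LLM client, and the unigram-recall score is a crude stand-in for a real evaluation metric such as ROUGE.

def call_model(prompt: str) -> str:
    # Hypothetical stub: replace with a real LLM client call.
    return "LLMs give different answers depending on prompt wording."

def unigram_recall(candidate: str, reference: str) -> float:
    # Fraction of reference tokens that appear in the candidate output.
    cand = set(candidate.lower().split())
    ref = set(reference.lower().split())
    return len(cand & ref) / len(ref) if ref else 0.0

document = "Large language models are sensitive to how prompts are worded."
reference = "LLMs are sensitive to prompt wording."

prompts = [
    "Summarize the following text:",
    "TL;DR:",
    "Explain the key point of this passage in one sentence:",
]

# Score every candidate prompt against the same reference and rank them.
results = {p: unigram_recall(call_model(f"{p}\n{document}"), reference)
           for p in prompts}
for prompt, score in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{score:.2f}  {prompt}")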

Tags: Large Language Models, Natural Language Processing, prompt optimization, AI model evaluation, text generation
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 0 / 25


Stars: 13
Forks:
Language: Jupyter Notebook
License: None
Last pushed: Nov 07, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/MantisAI/prompt_engineering"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
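
The same data can be fetched programmatically. Below is a minimal Python sketch using the third-party requests library; the response is presumably JSON, and since its schema is not documented on this page, the example simply pretty-prints whatever comes back.

import json
import requests  # third-party: pip install requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "prompt-engineering/MantisAI/prompt_engineering")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # fail loudly on 4xx/5xx (e.g. the daily rate limit)

# Schema is undocumented here, so just pretty-print the JSON payload.
print(json.dumps(resp.json(), indent=2))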