MantisAI/prompt_engineering
Code that accompanies the PyData New York (2022) talk: Addressing the Sensitivity of Large Language Models
This project helps you understand how different ways of phrasing a question (prompts) affect the answers you get from Large Language Models across various natural language processing tasks. You supply specific prompts and receive an evaluation of how well the language model performed on each. It is aimed at anyone who uses Large Language Models for tasks such as text summarization, question answering, or content generation and wants to optimize their prompts for better results.
No commits in the last 6 months.
Use this if you are working with Large Language Models and need to systematically test and compare how different prompts influence their output quality for various NLP challenges.
Not ideal if you don't have access to Mantis AWS, or if you are looking for a simple, out-of-the-box prompting solution without detailed evaluation.
Stars
13
Forks
—
Language
Jupyter Notebook
License
—
Category
Last pushed
Nov 07, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/MantisAI/prompt_engineering"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
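The same endpoint can be called from code. A minimal Python sketch, using only the standard library; the URL pattern comes from the curl example above, but the shape of the JSON response is an assumption and may differ:

```python
import json
import urllib.request

# Base endpoint taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering"


def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (requires network access;
    response fields are not documented here, so treat the result
    as an opaque dict)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


print(quality_url("MantisAI", "prompt_engineering"))
```

Without an API key this is rate-limited to 100 requests per day, so cache responses if you query many repositories.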
Higher-rated alternatives
panaversity/learn-modern-ai-python
Learn Modern AI Assisted Python with Type Hints
microsoft/PromptCraft-Robotics
Community for applying LLMs to robotics and a robot simulator with ChatGPT integration
isLinXu/prompt-engineering-note
🔥🔔prompt-engineering-note🔔🔥
aloth/RogueGPT
RogueGPT: A controlled stimulus generator for AI news authenticity research. (arXiv:2601.21963...
shun0t/versatile_bot_project
Transforming NotebookLM into a versatile bot