MikeWangWZHL/Solo-Performance-Prompting

Repo for paper "Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration"

Quality score: 33 / 100 (Emerging)

This project evaluates how large language models (LLMs) perform on complex tasks such as creative writing, collaborative games, and logic puzzles. Given a task description, it prompts the LLM to adopt multiple internal 'personas' that collaborate to solve the problem, then evaluates the resulting output. The intended users are AI researchers and prompt engineers who want to understand and improve LLM problem-solving capabilities.

349 stars. No commits in the last 6 months.

Use this if you are an AI researcher or prompt engineer looking to experiment with and evaluate advanced prompting techniques that enable LLMs to tackle complex, multi-step reasoning or creative generation tasks more effectively.

Not ideal if you are looking for an out-of-the-box application for end-users, or if your primary goal is to fine-tune an LLM rather than explore prompting strategies.

AI-research prompt-engineering LLM-evaluation cognitive-science-AI natural-language-processing
No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 15 / 25


Stars: 349
Forks: 32
Language: Python
License: none
Last pushed: May 08, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/MikeWangWZHL/Solo-Performance-Prompting"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
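The same endpoint can be reached from Python. A minimal sketch, assuming the URL path pattern shown in the curl command generalizes to other owner/repo pairs; the JSON response shape is not documented here, so fetching and parsing are left to the caller:

```python
# Build the quality-report URL for an arbitrary GitHub owner/repo pair,
# following the path pattern from the curl example above (an assumption
# for repos other than the one shown).
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering"

def quality_url(owner: str, repo: str) -> str:
    """Return the quality-report URL for the given repository."""
    return f"{API_BASE}/{owner}/{repo}"

print(quality_url("MikeWangWZHL", "Solo-Performance-Prompting"))
# → https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/MikeWangWZHL/Solo-Performance-Prompting
```

The returned URL can then be fetched with any HTTP client (e.g. `curl` as shown above, or `urllib.request` in the standard library), subject to the 100 requests/day unauthenticated limit.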