OFA-Sys/Ditto

A self-alignment method and benchmark for role-play. Resources for "Large Language Models are Superpositions of All Characters: Attaining Arbitrary Role-play via Self-Alignment".

Score: 39 / 100 (Emerging)

This project offers DITTO, a method that improves how large language models (LLMs) role-play different characters. It mines an LLM's existing knowledge of various roles and their typical conversations to build a large training dataset. The result is an LLM that maintains a consistent persona and provides accurate, role-specific information across multi-turn dialogues, useful for anyone building or evaluating advanced conversational AI.

211 stars. No commits in the last 6 months.

Use this if you are developing or evaluating LLMs and need to ensure they can consistently and accurately portray specific characters or personas in conversations.

Not ideal if you require an evaluation based purely on human preference, or if you need to assess subtle emotional nuances or subjective 'attractiveness' in role-play beyond objective consistency and factual accuracy.

conversational-ai persona-simulation llm-evaluation role-playing dialogue-systems
Flags: Stale (6m), No Package, No Dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 13 / 25


Stars: 211
Forks: 18
Language: Jupyter Notebook
License: MIT
Last pushed: May 28, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/OFA-Sys/Ditto"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
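The same endpoint can also be queried from Python using only the standard library. This is a minimal sketch: the endpoint URL comes from the curl command above, but the response's JSON field names are not documented here, so the example simply prints the raw payload, and the helper names (`quality_url`, `fetch_quality`) are illustrative.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a GitHub repository."""
    return f"{API_BASE}/llm-tools/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """GET the quality report and parse it as JSON.

    Each call counts against the 100 requests/day anonymous limit
    (1,000/day with a free key).
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


# Example (performs a live request, so it is commented out here):
# report = fetch_quality("OFA-Sys", "Ditto")
# print(json.dumps(report, indent=2))
```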