OFA-Sys/Ditto
A self-alignment method and benchmark for role-play. Resources for "Large Language Models are Superpositions of All Characters: Attaining Arbitrary Role-play via Self-Alignment".
This project offers DITTO, a self-alignment method that improves how large language models (LLMs) portray different characters. It mines an LLM's existing knowledge of various roles and their typical conversations to build a large role-play training dataset. The result is an LLM that maintains a consistent persona and provides accurate, role-specific information across multi-turn dialogues, useful for anyone building or evaluating advanced conversational AI.
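As a rough illustration only (not the authors' actual implementation), the self-alignment idea described above — use the model's own knowledge of characters to synthesize role-play training data — might be sketched as follows. Here `query_llm` is a hypothetical stub standing in for a real model call:

```python
# Illustrative sketch of a DITTO-style self-alignment data pipeline.
# `query_llm` is a hypothetical stand-in for a real LLM call; it is
# stubbed with a canned reply so the example runs end to end.

def query_llm(prompt: str) -> str:
    """Stub LLM: returns a canned reply. Replace with a real model call."""
    return f"[simulated reply to: {prompt[:40]}...]"

def build_character_profile(name: str) -> dict:
    """Ask the LLM what it already knows about a character (self-knowledge)."""
    bio = query_llm(f"Summarize the background and speaking style of {name}.")
    return {"name": name, "profile": bio}

def simulate_dialogue(character: dict, user_queries: list[str]) -> list[dict]:
    """Have the LLM answer in-character; each turn becomes a training sample."""
    samples = []
    for q in user_queries:
        prompt = (f"You are {character['name']}. Profile: {character['profile']}\n"
                  f"Stay in character and answer: {q}")
        samples.append({"role": character["name"],
                        "query": q,
                        "response": query_llm(prompt)})
    return samples

# Build a small synthetic dataset from the model's own role knowledge.
dataset = []
for name in ["Sherlock Holmes", "Marie Curie"]:
    char = build_character_profile(name)
    dataset.extend(simulate_dialogue(char, ["Where do you live?",
                                            "What drives your work?"]))
print(len(dataset))  # 2 characters x 2 queries = 4 samples
```

With a real model behind `query_llm`, the resulting `dataset` would be the kind of persona-labeled dialogue corpus one could fine-tune on; the paper's actual pipeline and prompts will differ.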
211 stars. No commits in the last 6 months.
Use this if you are developing or evaluating LLMs and need to ensure they can consistently and accurately portray specific characters or personas in conversations.
Not ideal if you require an evaluation based purely on human preference, or if you need to assess subtle emotional nuances or subjective 'attractiveness' in role-play beyond objective consistency and factual accuracy.
Stars: 211
Forks: 18
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: May 28, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/OFA-Sys/Ditto"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
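A minimal Python equivalent of the curl call above, for the anonymous tier. The response schema is not documented here, so the sketch just decodes generic JSON; the URL pattern is taken directly from the example:

```python
import json
import urllib.request

# URL pattern taken from the curl example above; other parameters are unknown.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/{owner}/{repo}"

def build_url(owner: str, repo: str) -> str:
    """Construct the quality-API URL for a given GitHub repository."""
    return API_URL.format(owner=owner, repo=repo)

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record as a dict (anonymous tier: 100 requests/day)."""
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)

print(build_url("OFA-Sys", "Ditto"))
```

How an API key is attached (header vs. query parameter) is not stated on this page, so the sketch omits it.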
Higher-rated alternatives
zjunlp/KnowledgeEditingPapers
Must-read Papers on Knowledge Editing for Large Language Models.
zjunlp/CaKE
[EMNLP 2025] Circuit-Aware Editing Enables Generalizable Knowledge Learners
zjunlp/unlearn
[ACL 2025] Knowledge Unlearning for Large Language Models
zjunlp/AutoSteer
[EMNLP 2025] AutoSteer: Automating Steering for Safe Multimodal Large Language Models
VinAIResearch/HPR
Householder Pseudo-Rotation: A Novel Approach to Activation Editing in LLMs with...