skywalker023/prosocial-dialog

🐥 Code and Dataset for our EMNLP 2022 paper - "ProsocialDialog: A Prosocial Backbone for Conversational Agents"

24 / 100 (Experimental)

This project provides a dataset designed to help conversation designers and AI developers build more ethical and helpful chatbots. It pairs potentially unsafe or harmful conversational utterances with suggested prosocial responses, detailed safety labels, and explanations. It is aimed at anyone developing or improving conversational AI systems who wants their bots to interact safely and constructively.

No commits in the last 6 months.

Use this if you are developing AI chatbots or virtual assistants and need a dataset to train them to identify and respond to unsafe or non-prosocial user input with helpful, ethical guidance.

Not ideal if you are looking for a general-purpose dialogue dataset for casual conversations without a specific focus on safety and prosocial behavior.

conversational-ai chatbot-design ai-safety ethical-ai dialogue-systems
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 0 / 25


Stars: 65
Forks:
Language: Python
License: MIT
Last pushed: Aug 02, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/skywalker023/prosocial-dialog"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
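If you query the API from code rather than the shell, a small helper keeps the URL construction in one place. This is a minimal sketch: the endpoint path (`/api/v1/quality/nlp/{owner}/{repo}`) is taken from the curl example above, but the `quality_url` helper name and the response schema are assumptions, not part of the documented API.

```python
# Hypothetical helper for the quality API shown in the curl example above.
# The base URL and path segments come from that example; everything else
# (function name, response handling) is an illustrative assumption.
from urllib.parse import quote

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a given category and repository."""
    return f"{BASE_URL}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_url("nlp", "skywalker023", "prosocial-dialog")

# Fetch with any HTTP client, e.g. the standard library:
# import json, urllib.request
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```

Percent-encoding the path segments with `quote` guards against owner or repo names containing characters that are unsafe in URLs.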