The WOCHAT series has recently joined efforts with the Dialogue Breakdown Detection Challenge with the objective of continuing to generate resources that can be made publicly available for further research and experimentation. In this shared task, participants are invited to develop technologies for detecting dialogue breakdowns in human-chatbot dialogue sessions.
In this edition, in addition to the original detection task, new error classification and response generation tasks are added, including datasets from ChatEval and newly collected human-chatbot dialogues in languages other than English.
In the dialogue breakdown detection track, participants are required to build systems that estimate, for each turn in a dialogue (given the previous turn and the dialogue history), a probability distribution over three labels: "breakdown", "possible breakdown", and "no breakdown". This track includes dialogues in English (600 dialogues) and Japanese (400 dialogues).
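As an illustration of the expected output, a minimal sketch of a majority-class baseline is shown below. The label set comes from the task description above; the function names, the dictionary output format, and the toy training annotations are assumptions for illustration only, not the official submission interface.

```python
from collections import Counter

# The three labels defined by the detection track.
LABELS = ("breakdown", "possible breakdown", "no breakdown")

def majority_baseline(train_labels):
    """Estimate one label distribution from training annotations.

    train_labels: iterable of gold labels, one per annotated turn.
    Returns a dict mapping each label to its empirical probability.
    """
    counts = Counter(train_labels)
    total = sum(counts.values()) or 1
    return {label: counts.get(label, 0) / total for label in LABELS}

def predict(dialogue_turns, distribution):
    """Assign the same estimated distribution to every system turn
    (a trivial baseline that ignores the dialogue history)."""
    return [{"turn": i, "probabilities": dict(distribution)}
            for i, _ in enumerate(dialogue_turns)]

# Toy training annotations (hypothetical).
gold = ["no breakdown", "no breakdown", "possible breakdown", "breakdown"]
dist = majority_baseline(gold)
predictions = predict(["Hello!", "I like trains."], dist)
```

A real submission would condition the distribution on the previous turn and the history context; this sketch only shows the shape of the per-turn output.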
In the error classification track, participants are required to build a classifier that identifies the error type of a dialogue breakdown event, given the previous turn and the dialogue history. This track includes 600 dialogues in Japanese from past Dialogue Breakdown Detection Challenges.
In the response generation track, participants are required to build a response generator or selector. The system should provide new responses aimed at correcting or recovering from a dialogue breakdown event. This track includes 600 dialogues in English from past Dialogue Breakdown Detection Challenges.
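For the selection variant of this track, a deliberately simple sketch is a word-overlap selector: given the dialogue context and a pool of candidate responses, pick the candidate sharing the most content words with the context. The function names, stopword list, and candidate pool are assumptions for illustration, not part of the task definition.

```python
def word_overlap(a, b):
    """Count content words shared by two utterances (naive tokenization,
    small hand-picked stopword list -- illustrative only)."""
    stop = {"the", "a", "an", "is", "are", "i", "you", "to", "do"}
    tokens = lambda s: {w.strip(".,!?").lower() for w in s.split()} - stop
    return len(tokens(a) & tokens(b))

def select_response(context, candidates):
    """Return the candidate with the highest word overlap with the context."""
    return max(candidates, key=lambda c: word_overlap(context, c))

context = "I just adopted a dog. Do you like dogs?"
candidates = ["Dogs are wonderful pets.", "The weather is nice today."]
best = select_response(context, candidates)
```

A generation-based entry would replace the candidate pool with a trained model; either way, the system consumes the context around the breakdown and emits one recovery response.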
You can register for the shared task at https://my.chateval.org/accounts/login/. Once registered, you will be able to download the datasets and accompanying readme documents, as well as submit your results at https://chateval.org/shared_task.
If you have further questions regarding the data and the shared tasks, please contact us at the following email address: firstname.lastname@example.org.