
Workshops and Session Series on Chatbots and Conversational Agents

Shared Task Description

As part of the WOCHAT workshop series, a Shared Task on Data Collection, Annotation and Evaluation is being conducted. In this task, participants are asked to generate human-machine and human-human dialogues, as well as to produce turn-level annotations for them. Human-machine dialogues can be generated using the different online and offline chat engines made available for this purpose. Annotations are produced following the provided set of guidelines. The collected dataset is made publicly available to the research community for further research and experimentation in future editions of the workshop. The most recent version of the dataset is available at this link.
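
For illustration only, the sketch below shows one possible way a collected dialogue with turn-level annotations could be represented and serialized. The field names, engine name, and annotation labels used here are hypothetical assumptions and are not taken from the official guidelines, which define the actual annotation scheme to be used.

# Hypothetical representation of a dialogue with turn-level annotations.
# Field names and label values are illustrative assumptions only;
# the provided annotation guidelines define the real scheme.

import json
from dataclasses import dataclass, asdict, field
from typing import List

@dataclass
class Turn:
    speaker: str      # e.g. "human" or "system"
    text: str         # the utterance itself
    annotation: str   # turn-level label assigned by the annotator (hypothetical values)

@dataclass
class Dialogue:
    dialogue_id: str
    chat_engine: str  # engine used for human-machine dialogues, if any
    turns: List[Turn] = field(default_factory=list)

dialogue = Dialogue(
    dialogue_id="example-001",
    chat_engine="example-chat-engine",   # hypothetical engine name
    turns=[
        Turn(speaker="human", text="Hello, how are you?", annotation="valid"),
        Turn(speaker="system", text="I am fine, thanks for asking.", annotation="valid"),
    ],
)

# Serialize the dialogue and its turn-level annotations for sharing.
print(json.dumps(asdict(dialogue), indent=2, ensure_ascii=False))
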

Metrics and Resources for Chat-oriented Dialogue Evaluation

WOCHAT's Shared Task is part of a larger-scope initiative aiming both at collecting chat-oriented dialogue data that can be made available for research purposes and at developing a framework for the automatic evaluation of chat-oriented dialogue. This effort comprises three interdependent tasks:

Ways of Participation and Registration

There are four different ways of participating in the shared task:

You can register to participate in one or more of the roles described above by using this form.