Toward a conversational model for counsel robots: how different question types elicit different linguistic behaviors

Bibliographic Details
Published in: Intelligent Service Robotics, 2021-07, Vol. 14 (3), p. 373-385
Main Authors: Choi, Sujin; Lee, Hanna; Lim, Yoonseob; Choi, Jongsuk; Sung, Jee Eun
Format: Article
Language: English
Online access: Full text
Description
Abstract: In recent years, robots have been playing the role of counselor or conversational partner in everyday dialogues and interactions with humans. For successful human–robot communication, it is very important to identify the best conversational strategies that can influence the responses of the human client in human–robot interactions. The purpose of the present study is to examine linguistic behaviors in human–human conversation using chatting data in order to provide the best model for effective conversation in human–robot interaction. We analyzed the conversational data by categorizing them into question types, namely Wh-questions and “yes” or “no” (YN) questions, and their corresponding linguistic behaviors (self-disclosure elicitation, self-disclosure, simple “yes” or “no” answers, and acknowledgment). We also compared the utterance length of clients depending on the question type. In terms of linguistic behaviors, the results reveal that Wh-questions elicited significantly higher rates of self-disclosure elicitation and acknowledgment than YN-questions. Among the Wh-subtypes, how was found to promote more linguistic behaviors, such as self-disclosure elicitation, self-disclosure, and acknowledgment, than the other Wh-subtypes. On the other hand, YN-questions generated significantly higher rates of simple “yes” or “no” answers than Wh-questions. In addition, Wh-questions elicited longer utterances than YN-questions. We suggest that the question types used by the robot counselor must be considered in order to elicit varied linguistic behaviors and utterances from humans. Our research is meaningful in providing efficient conversation strategies for robot utterances that conform to humans’ linguistic behaviors.
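
The abstract describes categorizing counselor questions into Wh- and YN types and comparing the length of client replies across types. As a minimal, purely illustrative sketch (not the authors' actual coding scheme), the Python snippet below assumes a simple rule-based classifier: a question is treated as a Wh-question when it begins with a Wh-word and as a YN-question when it begins with an auxiliary verb, and reply length is measured in word tokens. The word lists, the length metric, and the toy dialogue are all assumptions made for demonstration.

    import re
    from collections import Counter

    # Illustrative rule-based question-type classifier, loosely following the
    # Wh- vs. yes/no (YN) distinction described in the abstract. Word lists,
    # the word-token length metric, and the toy dialogue are assumptions,
    # not the authors' coding scheme.
    WH_WORDS = {"what", "why", "how", "when", "where", "who", "which"}
    AUX_WORDS = {"do", "does", "did", "is", "are", "was", "were",
                 "have", "has", "can", "could", "will", "would"}

    def classify_question(utterance):
        """Return 'WH', 'YN', or 'OTHER' for a counselor utterance."""
        text = utterance.strip().lower()
        if not text.endswith("?"):
            return "OTHER"
        first_word = re.split(r"\W+", text, maxsplit=1)[0]
        if first_word in WH_WORDS:
            return "WH"
        if first_word in AUX_WORDS:
            return "YN"
        return "OTHER"

    # Toy dialogue: (counselor question, client reply) pairs.
    pairs = [
        ("How did that make you feel?", "I felt ignored, and it reminded me of last year."),
        ("Did you talk to your manager?", "No."),
        ("What happened after the meeting?", "I went home and wrote everything down."),
    ]

    counts = Counter()
    reply_lengths = {"WH": [], "YN": [], "OTHER": []}
    for question, reply in pairs:
        qtype = classify_question(question)
        counts[qtype] += 1
        reply_lengths[qtype].append(len(reply.split()))  # reply length in word tokens

    for qtype, lengths in reply_lengths.items():
        if lengths:
            print(qtype, "questions:", counts[qtype],
                  "| mean reply length:", round(sum(lengths) / len(lengths), 1))

Run on the toy data above, such a tally would show the pattern the study reports in aggregate: Wh-questions tend to be followed by longer, more self-disclosing replies, while YN-questions often receive a bare “yes” or “no”.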
ISSN: 1861-2776, 1861-2784
DOI: 10.1007/s11370-021-00375-6