Show simple item record

dc.contributor.author: Zhang, Leshao
dc.date.accessioned: 2021-04-08T14:55:33Z
dc.date.available: 2021-04-08T14:55:33Z
dc.date.issued: 2020-10-28
dc.identifier.citation: Zhang, Leshao. 2020. Coordination of Nods in Dialogue. Queen Mary University of London. (en_US)
dc.identifier.uri: https://qmro.qmul.ac.uk/xmlui/handle/123456789/71130
dc.description: PhD Thesis (en_US)
dc.description.abstract: Behavioral mimicry has been claimed to be a nonconscious behavior that evokes prosocial effects (liking, trust, empathy, persuasiveness) between interaction partners. Recently, Intelligent Virtual Agents (IVAs) and Immersive Virtual Environments (IVEs) have provided rich new possibilities for nonverbal behavior studies such as mimicry studies. One of the best-known effects is the "Digital Chameleons" effect, in which an IVA appears to be more persuasive if it automatically mimics a listener's head nods. However, this effect has not been consistently replicated. This thesis explores the basis of the "chameleon effects" using a customized IVE integrated with a full-body motion capture system that supports real-time behavior manipulation in the IVE. Two replications explore the effectiveness of the virtual speaker and the head nodding behavior of interaction partners in agent-listener and avatar-listener interaction by manipulating the virtual speaker's head nods, and they provide mixed results. The first experiment fails to replicate the original finding of mimicry leading to higher ratings of an agent's effectiveness. The second experiment shows a higher rating for agreement with a mimicking avatar. Overall, an avatar speaker appears more likely to activate an effect of behavioral mimicry than an agent speaker, probably because the avatar speaker provides richer nonverbal cues than the agent speaker. Detailed analysis of the motion data for speaker and listener head movements reveals systematic differences in a) head nodding between a speaker producing a monologue and a speaker engaged in a dialogue, b) head nodding of speakers and listeners in the high and low frequency domains, and c) the reciprocal dynamics of head nodding under different virtual speaker head nodding behaviors. We conclude that: i) the activation of behavioral mimicry requires a certain number of nonverbal cues, ii) speakers behave differently in monologue and dialogue, iii) speakers and listeners nod asymmetrically in different frequency domains, iv) the coordination of head nods in natural dialogue is no more than we would expect by chance, and v) speakers' and listeners' head nods become coordinated by spontaneous collaborative adjustment of their head nods. (en_US)
dc.language.iso: en (en_US)
dc.publisher: Queen Mary University of London (en_US)
dc.title: Coordination of Nods in Dialogue. (en_US)
dc.type: Thesis (en_US)
rioxxterms.funder: Default funder (en_US)
rioxxterms.identifier.project: Default project (en_US)



This item appears in the following Collection(s)

  • Theses [4192]
    Theses Awarded by Queen Mary University of London
