Show simple item record

dc.contributor.author     Koelstra, Reinder Alexander Lambertus
dc.date.accessioned       2015-09-07T15:21:51Z
dc.date.available         2015-09-07T15:21:51Z
dc.date.issued            2012-03
dc.identifier.citation    Koelstra, R.A.L. (2012). Affective and Implicit Tagging using Facial Expressions and Electroencephalography. Queen Mary University of London.  en_US
dc.identifier.uri         http://qmro.qmul.ac.uk/xmlui/handle/123456789/8481
dc.description            PhD  en_US
dc.description.abstract   Recent years have seen an explosion of user-generated, untagged multimedia data, generating a need for efficient search and retrieval of this data. The predominant method for content-based tagging is manual annotation. Consequently, automatic tagging is currently the subject of intensive research. However, it is clear that the process will not be fully automated in the foreseeable future. We propose to involve the user and investigate methods for implicit tagging, wherein users' responses to the multimedia content are analysed in order to generate descriptive tags. We approach this problem through the modalities of facial expressions and EEG signals. We investigate tag validation and affective tagging using EEG signals. The former relies on the detection of event-related potentials triggered in response to the presentation of invalid tags alongside multimedia material. We demonstrate significant differences in users' EEG responses to valid versus invalid tags, and present results towards single-trial classification. For affective tagging, we propose methodologies to map EEG signals onto the valence-arousal space and perform both binary classification and regression into this space. We apply these methods in a real-time affective recommendation system. We also investigate the analysis of facial expressions for implicit tagging. This relies on a dynamic texture representation using non-rigid registration that we first evaluate on the problem of facial action unit recognition. We present results on well-known datasets (with both posed and spontaneous expressions) comparable to the state of the art in the field. Finally, we present a multi-modal approach that fuses both modalities for affective tagging. We perform classification in the valence-arousal space based on these modalities and present results for both feature-level and decision-level fusion. We demonstrate improved results when using both modalities, suggesting that the modalities contain complementary information.  en_US
(A short illustrative sketch of the feature-level and decision-level fusion strategies follows the record below.)
dc.language.iso           en  en_US
dc.publisher              Queen Mary University of London  en_US
dc.subject                Electronic Engineering  en_US
dc.title                  Affective and Implicit Tagging using Facial Expressions and Electroencephalography.  en_US
dc.type                   Thesis  en_US
dc.rights.holder          The copyright of this thesis rests with the author, and no quotation from it or information derived from it may be published without the prior written consent of the author.
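
The abstract contrasts feature-level fusion (concatenating features from both modalities before classification) with decision-level fusion (combining the outputs of per-modality classifiers). The following is a minimal, hypothetical Python sketch of that distinction for binary valence classification, using scikit-learn and synthetic stand-in features; it is not the thesis's implementation, and the feature dimensions, classifier choice, and averaging rule below are illustrative assumptions only.

    # Minimal sketch: feature-level vs. decision-level fusion for binary
    # valence classification. Random synthetic features stand in for the
    # EEG and facial-expression descriptors; NOT the thesis's method.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n = 400
    y = rng.integers(0, 2, size=n)                    # binary valence label
    eeg = 0.5 * y[:, None] + rng.normal(size=(n, 32))   # stand-in EEG features
    face = 0.5 * y[:, None] + rng.normal(size=(n, 12))  # stand-in facial features

    idx_train, idx_test = train_test_split(
        np.arange(n), test_size=0.25, random_state=0)

    # Feature-level fusion: concatenate modalities, train one classifier.
    X = np.hstack([eeg, face])
    clf = LogisticRegression(max_iter=1000).fit(X[idx_train], y[idx_train])
    acc_feat = accuracy_score(y[idx_test], clf.predict(X[idx_test]))

    # Decision-level fusion: train one classifier per modality, then
    # combine their posterior probabilities (here: a simple average).
    clf_eeg = LogisticRegression(max_iter=1000).fit(eeg[idx_train], y[idx_train])
    clf_face = LogisticRegression(max_iter=1000).fit(face[idx_train], y[idx_train])
    p = (clf_eeg.predict_proba(eeg[idx_test])[:, 1]
         + clf_face.predict_proba(face[idx_test])[:, 1]) / 2
    acc_dec = accuracy_score(y[idx_test], (p >= 0.5).astype(int))

    print(f"feature-level fusion accuracy:  {acc_feat:.2f}")
    print(f"decision-level fusion accuracy: {acc_dec:.2f}")

Averaging posterior probabilities is only one possible decision-level rule; weighted combinations or trained meta-classifiers are common alternatives, and the thesis's actual fusion schemes may differ.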


This item appears in the following Collection(s)

  • Theses [3366]
    Theses Awarded by Queen Mary University of London
