Show simple item record

dc.contributor.author: Shan, Caifeng
dc.date.accessioned: 2016-09-07T12:17:23Z
dc.date.available: 2016-09-07T12:17:23Z
dc.date.issued: 2008-02
dc.date.submitted: 2016-09-07T12:58:57.380Z
dc.identifier.citation: Shan, C. 2008. Inferring Facial and Body Language. Queen Mary University of London
dc.identifier.issn: 1470-5559
dc.identifier.uri: http://www.eecs.qmul.ac.uk/tech_reports/RR-08-01.pdf
dc.identifier.uri: http://qmro.qmul.ac.uk/xmlui/handle/123456789/15020
dc.description.abstract: Machine analysis of human facial and body language is a challenging topic in computer vision, impacting on important applications such as human-computer interaction and visual surveillance. In this thesis, we present research building towards computational frameworks capable of automatically understanding facial expression and behavioural body language. The thesis work commences with a thorough examination of issues surrounding facial representation based on Local Binary Patterns (LBP). Extensive experiments with different machine learning techniques demonstrate that LBP features are efficient and effective for person-independent facial expression recognition, even in low-resolution settings. We then present and evaluate a conditional mutual information-based algorithm to efficiently learn the most discriminative LBP features, and show that the best recognition performance is obtained by using SVM classifiers with the selected LBP features. However, this recognition is performed on static images without exploiting the temporal behaviour of facial expressions. Subsequently, we present a method to capture and represent the temporal dynamics of facial expression by discovering the underlying low-dimensional manifold. Locality Preserving Projections (LPP) is exploited to learn the expression manifold in the LBP-based appearance feature space. By deriving a universal discriminant expression subspace using a supervised LPP, we can effectively align manifolds of different subjects on a generalised expression manifold. Different linear subspace methods are comprehensively evaluated in expression subspace learning. We formulate and evaluate a Bayesian framework for dynamic facial expression recognition employing the derived manifold representation. However, the manifold representation only addresses temporal correlations of the whole face image and does not consider spatio-temporal correlations among different facial regions. We then employ Canonical Correlation Analysis (CCA) to capture correlations among face parts. To overcome the inherent limitations of classical CCA for image data, we introduce and formalise a novel Matrix-based CCA (MCCA), which can better measure correlations in 2D image data. We show this technique can provide superior performance in regression and recognition tasks, whilst requiring significantly fewer canonical factors. All the above work focuses on facial expressions. However, the face is usually perceived not as an isolated object but as an integrated part of the whole body, and the visual channel combining facial and bodily expressions is most informative. Finally, we investigate two understudied problems in body language analysis: gait-based gender discrimination and affective body gesture recognition. To effectively combine face and body cues, CCA is adopted to establish the relationship between the two modalities and derive a semantic joint feature space for feature-level fusion. Experiments on large data sets demonstrate that our multimodal systems achieve superior performance in gender discrimination and affective state analysis.
dc.description.sponsorship: Research studentship of Queen Mary, the International Travel Grant of the Royal Academy of Engineering, and the Royal Society International Joint Project.
dc.language.iso: en
dc.publisher: Queen Mary University of London
dc.subject: Computer vision
dc.subject: Facial representation
dc.subject: Canonical Correlation Analysis
dc.title: Inferring Facial and Body Language
dc.type: Thesis
dc.rights.holder: The copyright of this thesis rests with the author and no quotation from it or information derived from it may be published without the prior written consent of the author.
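
The abstract above outlines an LBP-plus-SVM pipeline for facial expression recognition. The following is a minimal sketch of that idea, assuming scikit-image's local_binary_pattern and scikit-learn's SVC are available; the grid size, parameters, and variable names are illustrative assumptions, not the thesis implementation.

    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    def lbp_histogram(gray_face, grid=(7, 6), n_points=8, radius=1):
        """Split an aligned grayscale face into a grid of regions and
        concatenate a uniform-LBP histogram from each region."""
        lbp = local_binary_pattern(gray_face, n_points, radius, method="uniform")
        n_bins = n_points + 2  # uniform patterns plus one "non-uniform" bin
        features = []
        for row in np.array_split(lbp, grid[0], axis=0):
            for cell in np.array_split(row, grid[1], axis=1):
                hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins), density=True)
                features.append(hist)
        return np.concatenate(features)

    # Assumed data: train_faces/test_faces are lists of aligned grayscale face
    # images and train_labels are expression labels (e.g. "happy", "angry").
    # X_train = np.stack([lbp_histogram(f) for f in train_faces])
    # clf = SVC(kernel="rbf").fit(X_train, train_labels)
    # y_pred = clf.predict(np.stack([lbp_histogram(f) for f in test_faces]))

For the face-body fusion described at the end of the abstract, a rough sketch under the same caveats could use scikit-learn's CCA to project face and body feature vectors into a shared space before classification:

    from sklearn.cross_decomposition import CCA

    # face_feats: (n_samples, d_face) array; body_feats: (n_samples, d_body) array
    # cca = CCA(n_components=20).fit(face_feats, body_feats)
    # face_c, body_c = cca.transform(face_feats, body_feats)
    # fused = np.hstack([face_c, body_c])  # joint feature space for an SVM or similar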


This item appears in the following Collection(s)

  • Theses [4235]
    Theses Awarded by Queen Mary University of London