Show simple item record

dc.contributor.author      Fernandez Arguedas, Virginia
dc.date.accessioned        2013-02-01T14:54:36Z
dc.date.available          2013-02-01T14:54:36Z
dc.date.issued             2012
dc.identifier.uri          http://qmro.qmul.ac.uk/xmlui/handle/123456789/3354
dc.description             PhD    en_US
dc.description.abstract    The recent popularity of surveillance video systems, especially in urban scenarios, demands the development of visual techniques for monitoring purposes. A primary step towards intelligent surveillance video systems is automatic object classification, which remains an open research problem and the keystone for the development of more specific applications. Typically, object representation is based on inherent visual features. However, psychological studies have demonstrated that human beings can routinely categorise objects according to their behaviour. The gap in understanding between the features automatically extracted by a computer, such as appearance-based features, and the concepts unconsciously perceived by human beings but unattainable for machines, namely the behaviour features, is commonly known as the semantic gap. Consequently, this thesis proposes to narrow the semantic gap and bring together machine and human understanding towards object classification. Thus, a Surveillance Media Management framework is proposed to automatically detect and classify objects by analysing the physical properties inherent in their appearance (machine understanding) and the behaviour patterns, which require a higher level of understanding (human understanding). A probabilistic multimodal fusion algorithm then bridges the gap, performing automatic classification that considers both machine and human understanding. The performance of the proposed Surveillance Media Management framework has been thoroughly evaluated on outdoor surveillance datasets. The experiments conducted demonstrated that the combination of machine and human understanding substantially enhances object classification performance. Finally, the inclusion of human reasoning and understanding provides the essential information needed to bridge the semantic gap towards smart surveillance video systems.    en_US
dc.language.iso            en    en_US
dc.publisher               Queen Mary University of London
dc.subject                 Oncogenic microRNAs    en_US
dc.subject                 cell cycles    en_US
dc.subject                 tumour suppressor genes    en_US
dc.subject                 Human Mammary Epithelial Cells.    en_US
dc.title                   Automatic object classification for surveillance videos.    en_US
dc.type                    Thesis    en_US
dc.rights.holder           The copyright of this thesis rests with the author and no quotation from it or information derived from it may be published without the prior written consent of the author
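
The abstract above describes a probabilistic multimodal fusion algorithm that combines an appearance-based classifier ("machine understanding") with a behaviour-based classifier ("human understanding"). The record does not specify the fusion rule, so the sketch below is only a minimal illustration in Python of one common late-fusion scheme, a weighted product of per-class posteriors; the class labels, the weight parameter and the fuse_posteriors function are hypothetical assumptions, not the method used in the thesis.

    # Illustrative sketch only: weighted product-of-experts fusion of two
    # per-class posterior vectors, one from an appearance-based classifier
    # and one from a behaviour-based classifier. All names are hypothetical.
    import numpy as np

    CLASSES = ["pedestrian", "car", "bicycle"]  # assumed object classes

    def fuse_posteriors(p_appearance, p_behaviour, w_appearance=0.5):
        """Combine two posterior vectors with a weighted geometric mean and renormalise."""
        p_a = np.asarray(p_appearance, dtype=float)
        p_b = np.asarray(p_behaviour, dtype=float)
        fused = (p_a ** w_appearance) * (p_b ** (1.0 - w_appearance))
        return fused / fused.sum()  # renormalise so the result is a valid distribution

    if __name__ == "__main__":
        # Toy posteriors: appearance cues favour "car", behaviour cues favour "pedestrian".
        p_app = [0.2, 0.7, 0.1]
        p_beh = [0.6, 0.3, 0.1]
        print(dict(zip(CLASSES, fuse_posteriors(p_app, p_beh).round(3))))

With equal weights the fused distribution sits between the two modalities; in practice the weight would be tuned on validation data so that the more reliable modality dominates.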


Files in this item


There are no files associated with this item.

This item appears in the following Collection(s)

  • Theses [4122]
    Theses Awarded by Queen Mary University of London
