Show simple item record

dc.contributor.author     Tahir, Syed Fahad
dc.date.accessioned       2018-03-28T12:23:04Z
dc.date.available         2018-03-28T12:23:04Z
dc.date.issued            2016-03-02
dc.date.submitted         2018-03-28T13:15:06.659Z
dc.identifier.citation    Tahir, S.F. 2016. Resource-constrained re-identification in camera networks. Queen Mary University of London    en_US
dc.identifier.uri         http://qmro.qmul.ac.uk/xmlui/handle/123456789/36123
dc.description            PhD    en_US
dc.description.abstract   In multi-camera surveillance, the association of people detected in different camera views over time, known as person re-identification, is a fundamental task. Re-identification is a challenging problem because of changes in the appearance of people under varying camera conditions. Existing approaches focus on improving re-identification accuracy, while no specific effort has yet been put into efficiently utilising the resources that are normally limited in a camera network, such as storage, computation and communication capabilities. In this thesis, we aim to perform and improve re-identification under constrained resources. More specifically, we reduce the data needed to represent the appearance of an object through a proposed feature-selection method and a difference-vector representation. The proposed feature-selection method considers the computational cost of feature extraction and the cost of storing the feature descriptor jointly with the feature's re-identification performance, in order to select the most cost-effective and best-performing features. This selection improves inter-camera re-identification while reducing storage and computation requirements within each camera. The selected features are ranked in order of effectiveness, which enables a further reduction by dropping the least effective features when application constraints require it. We also reduce the communication overhead in the camera network by transferring only a difference vector, obtained from the extracted features of an object and the reference features within a camera, as the object representation used for association. To reduce the number of possible matches per association, we group the objects appearing within a defined time interval in uncalibrated camera pairs. Such grouping improves re-identification, since only objects that appear within the same time interval in a camera pair need to be associated. For temporal alignment of cameras, we exploit differences between the frame numbers of the detected objects in a camera pair. Finally, in contrast to the pairwise camera associations used in the literature, we propose a many-to-one camera association method for re-identification, where multiple cameras can be candidates for having generated the previous detections of an object. We obtain camera-invariant matching scores from the scores produced by pairwise re-identification approaches; these scores measure the likelihood of a correct match between the objects detected in a group of cameras. Experimental results on publicly available and in-lab multi-camera image and video datasets show that the proposed methods successfully reduce storage, computation and communication requirements while improving the re-identification rate compared to existing re-identification approaches.    en_US
dc.language.iso           en_US    en_US
dc.publisher              Queen Mary University of London    en_US
dc.rights                 The copyright of this thesis rests with the author and no quotation from it or information derived from it may be published without the prior written consent of the author
dc.subject                Electronic Engineering and Computer Science    en_US
dc.subject                multi-camera surveillance    en_US
dc.subject                person re-identification    en_US
dc.title                  Resource-constrained re-identification in camera networks    en_US
dc.type                   Thesis    en_US


This item appears in the following Collection(s)

  • Theses [4116]
    Theses Awarded by Queen Mary University of London
