Show simple item record

dc.contributor.author    Khalid, ObaidUllah
dc.date.accessioned    2017-09-26T13:03:40Z
dc.date.available    2017-09-26T13:03:40Z
dc.date.issued    2017-05-25
dc.date.submitted    2017-09-26T12:41:54.290Z
dc.identifier.citation    Khalid, O. 2017. Performance evaluation for tracker-level fusion in video tracking. Queen Mary University of London    en_US
dc.identifier.uri    http://qmro.qmul.ac.uk/xmlui/handle/123456789/25901
dc.description    PhD    en_US
dc.description.abstract    Tracker-level fusion for video tracking combines the outputs (state estimates) of multiple trackers to address the shortcomings of individual trackers. Furthermore, evaluating tracker performance at run time (online) can identify low-performing trackers so that they can be removed from the fusion. This thesis presents a tracker-level fusion framework that performs online tracking-performance evaluation for fusion. We first introduce a two-step method to determine the time instants of tracker failure. First, we evaluate tracking performance by comparing the distributions of the tracker state region and a region around the state. We use Distribution Fields to generate the distributions of both regions and compute a tracking-performance score by comparing the distributions with the L1 distance. Then, we model this score as a time series and employ the Auto-Regressive Moving Average (ARMA) method to forecast future values of the performance score. The difference between the original and forecast values yields a forecast-error signal that we use to detect tracking failure. We test the method on different datasets and then demonstrate its flexibility using tracking results and sequences from the Visual Object Tracking (VOT) challenge. The second part presents a tracker-level fusion method that combines the outputs of multiple trackers in three steps. First, we group trackers into clusters based on the spatio-temporal pairwise relationships of their outputs. Then, we evaluate tracking performance by reverse-time analysis with an adaptive reference frame and define the cluster whose trackers appear to be successfully following the target as the on-target cluster. Finally, we fuse the outputs of the trackers in the on-target cluster to obtain the final target state. The fusion approach uses standard tracker outputs and can therefore combine trackers of various types. We test the method with several combinations of state-of-the-art trackers and compare it with the individual trackers and with other fusion approaches.    en_US
dc.description.sponsorship    EACEA, under the EMJD ICE Project.    en_US
dc.language.iso    en    en_US
dc.publisher    Queen Mary University of London    en_US
dc.rights    The copyright of this thesis rests with the author, and no quotation from it or information derived from it may be published without the prior written consent of the author.
dc.subject    Electronic Engineering and Computer Science    en_US
dc.subject    video tracking    en_US
dc.subject    tracker-level fusion    en_US
dc.title    Performance evaluation for tracker-level fusion in video tracking    en_US
dc.type    Thesis    en_US
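
The failure-detection step summarized in the abstract can be illustrated with a short Python sketch: score tracking performance as the L1 distance between the distribution fields of the state region and a region around it, then flag failure when the latest score deviates from an ARMA one-step forecast. This is a minimal sketch under stated assumptions, not the thesis implementation: the simplified distribution field omits the spatial and feature smoothing of the full technique, both patches are assumed to be equal-sized grayscale arrays, and the ARMA order (2, 2) and the 0.2 threshold are arbitrary illustrative choices.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def distribution_field(patch, n_bins=8):
    """Simplified distribution field: a per-pixel one-hot distribution over
    intensity bins (the full technique also smooths across space/feature)."""
    bins = np.minimum((patch / 256.0 * n_bins).astype(int), n_bins - 1)
    df = np.zeros((n_bins,) + patch.shape)
    for b in range(n_bins):
        df[b] = (bins == b)
    return df  # each pixel's bin column already sums to 1

def performance_score(state_patch, surround_patch, n_bins=8):
    """Tracking-performance score: normalized L1 distance between the
    distribution fields of the state region and the surrounding region."""
    df_a = distribution_field(state_patch, n_bins)
    df_b = distribution_field(surround_patch, n_bins)
    return np.abs(df_a - df_b).sum() / df_a.size

def forecast_error(scores, p=2, q=2):
    """Fit ARMA(p, q) (ARIMA with d=0) to the score history and return the
    absolute difference between the latest score and its one-step forecast."""
    history, latest = scores[:-1], scores[-1]
    fitted = ARIMA(history, order=(p, 0, q)).fit()
    return abs(latest - fitted.forecast(steps=1)[0])

# Score two random grayscale patches (stand-ins for the two regions).
rng = np.random.default_rng(0)
patch_a = rng.integers(0, 256, size=(32, 32))
patch_b = rng.integers(0, 256, size=(32, 32))
print("score for two random patches:", performance_score(patch_a, patch_b))

# Toy run: a stable score series followed by a sudden jump, as a tracker
# drifting off-target might produce; failure is flagged by a large error.
scores = list(0.1 + 0.01 * rng.standard_normal(30))
scores.append(0.6)
print("failure detected:", forecast_error(np.array(scores)) > 0.2)

In the thesis the score is computed online at every frame, so the forecast error acts as a running failure signal; the fusion part (clustering trackers by spatio-temporal pairwise relations and reverse-time analysis) builds on such per-tracker evaluations and is not covered by this sketch.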


Files in this item


This item appears in the following Collection(s)

  • Theses [3584]
    Theses Awarded by Queen Mary University of London
