Simple item record

dc.contributor.author: Vrochidis, Stefanos
dc.date.accessioned: 2015-09-16T11:02:50Z
dc.date.available: 2015-09-16T11:02:50Z
dc.date.issued: 2013-03
dc.identifier.citation: Vrochidis, S. 2013. Interactive video retrieval using implicit user feedback. Queen Mary University of London. [en_US]
dc.identifier.uri: http://qmro.qmul.ac.uk/xmlui/handle/123456789/8729
dc.description: PhD [en_US]
dc.description.abstract: In recent years, the rapid development of digital technologies and the low cost of recording media have greatly increased the availability of multimedia content worldwide. This availability creates demand for advanced search engines. Traditionally, manual annotation of video was a common practice to support retrieval, but the sheer volume of multimedia content makes such practices prohibitively expensive in terms of human effort. At the same time, the availability of low-cost wearable sensors delivers a wealth of user-machine interaction data. An important challenge, therefore, is to exploit implicit user feedback (such as navigation patterns and eye movements) gathered during interactive multimedia retrieval sessions in order to improve video search engines. In this thesis, we focus on automatically annotating video content by exploiting the aggregated implicit feedback of past users, expressed as click-through data and gaze movements. Towards this goal, we conducted interactive video retrieval experiments to collect click-through and eye-movement data in environments that were not strictly controlled. First, we generate semantic relations between multimedia items by proposing a graph representation of aggregated past interaction data, and we exploit these relations to generate recommendations and to improve content-based search. Then, we investigate the role of gaze movements in interactive video retrieval and propose a methodology for inferring user interest using support vector machines with gaze-movement-based features. Finally, we propose an automatic video annotation framework that combines query clustering into topics, by constructing gaze-movement-driven random forests and temporally enhanced dominant sets, with video shot classification for predicting the relevance of viewed items to a topic. The results show that exploiting heterogeneous implicit feedback from past users adds value for future users of interactive video retrieval systems. [en_US]
dc.language.iso: en [en_US]
dc.publisher: Queen Mary University of London [en_US]
dc.subject: Electronic Engineering [en_US]
dc.subject: Video annotation [en_US]
dc.subject: Video retrieval [en_US]
dc.title: Interactive video retrieval using implicit user feedback. [en_US]
dc.type: Thesis [en_US]
dc.rights.holder: The copyright of this thesis rests with the author, and no quotation from it or information derived from it may be published without the prior written consent of the author.
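
The abstract mentions building a graph over aggregated past click-through data and using it to generate recommendations. The thesis defines its own graph construction; the following is only a minimal illustrative sketch, assuming simple session-level co-click counts. The function names, the session data, and the weighting scheme are all hypothetical, not the author's method.

```python
from collections import defaultdict
from itertools import combinations

def build_coclick_graph(sessions):
    """Edge weight between two shots = number of sessions in which both were clicked."""
    graph = defaultdict(lambda: defaultdict(int))
    for clicked in sessions:
        for a, b in combinations(sorted(set(clicked)), 2):
            graph[a][b] += 1
            graph[b][a] += 1
    return graph

def recommend(graph, shot, k=3):
    """Recommend the k shots most strongly co-clicked with the given shot."""
    neighbours = graph[shot]
    return sorted(neighbours, key=neighbours.get, reverse=True)[:k]

# Hypothetical click-through sessions: each list holds the shots one user clicked.
sessions = [
    ["shot1", "shot2", "shot3"],
    ["shot2", "shot3"],
    ["shot1", "shot3", "shot4"],
]
graph = build_coclick_graph(sessions)
print(recommend(graph, "shot3"))  # e.g. ['shot1', 'shot2', 'shot4']
```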
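
The abstract also mentions inferring user interest with support vector machines over gaze-movement features. Again as a minimal sketch rather than the thesis's actual methodology: it assumes three made-up per-shot gaze features (fixation count, mean fixation duration, total dwell time) and uses scikit-learn's SVC on toy data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-shot gaze features: [fixation count, mean fixation (ms), total dwell (ms)].
X = np.array([
    [12, 310, 4200],
    [ 3, 140,  800],
    [15, 290, 5100],
    [ 2, 120,  600],
    [10, 260, 3900],
    [ 4, 150, 1000],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = viewer judged the shot relevant to the topic

# Standardise the features, then fit an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict([[11, 280, 4000]]))  # gaze pattern close to the positive examples
```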


This item appears in the following Collection(s)

  • Theses [4223]
    Theses Awarded by Queen Mary University of London
