Show simple item record

dc.contributor.author: Kordopatis-Zilos, G. (en_US)
dc.contributor.author: Papadopoulos, S. (en_US)
dc.contributor.author: Patras, I. (en_US)
dc.contributor.author: Kompatsiaris, I. (en_US)
dc.contributor.author: International Conference on Computer Vision (en_US)
dc.date.accessioned: 2019-12-13T14:18:42Z
dc.date.issued: 2019-10-31 (en_US)
dc.identifier.uri: https://qmro.qmul.ac.uk/xmlui/handle/123456789/61984
dc.description.abstract: In this paper we introduce ViSiL, a Video Similarity Learning architecture that considers fine-grained spatio-temporal relations between pairs of videos – such relations are typically lost in previous video retrieval approaches that embed the whole frame or even the whole video into a vector descriptor before the similarity estimation. By contrast, our Convolutional Neural Network (CNN)-based approach is trained to calculate video-to-video similarity from refined frame-to-frame similarity matrices, so as to consider both intra- and inter-frame relations. In the proposed method, pairwise frame similarity is estimated by applying Tensor Dot (TD) followed by Chamfer Similarity (CS) on regional CNN frame features – this avoids feature aggregation before the similarity calculation between frames. Subsequently, the similarity matrix between all video frames is fed to a four-layer CNN, and then summarized using Chamfer Similarity (CS) into a video-to-video similarity score – this avoids feature aggregation before the similarity calculation between videos and captures the temporal similarity patterns between matching frame sequences. We train the proposed network using a triplet loss scheme and evaluate it on five public benchmark datasets on four different video retrieval problems, where we demonstrate large improvements in comparison to the state of the art. The implementation of ViSiL is publicly available. (en_US)
dc.format.extent: 6351 - 6360 (10) (en_US)
dc.rights: This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
dc.title: ViSiL: Fine-grained Spatio-Temporal Video Similarity Learning (en_US)
dc.type: Conference Proceeding
dc.rights.holder: © 2019 The Author(s)
pubs.notes: Not known (en_US)
pubs.publication-status: Published (en_US)
rioxxterms.funder: Default funder (en_US)
rioxxterms.identifier.project: Default project (en_US)
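
The abstract describes computing frame-to-frame similarity via Tensor Dot followed by Chamfer Similarity on regional features, and then reducing the resulting frame-to-frame similarity matrix to a video-level score. The following is a minimal NumPy sketch of those two reductions only; it is an illustration, not the authors' implementation, and it omits the four-layer CNN that ViSiL applies to refine the similarity matrix before the final reduction. Feature shapes and L2-normalization are assumptions.

```python
import numpy as np

def chamfer_similarity(sim):
    # Chamfer Similarity: average of the row-wise maxima of a similarity matrix.
    return sim.max(axis=1).mean()

def frame_similarity(x, y):
    # x: (N, D) regional features of one frame; y: (M, D) of another.
    # Assumes rows are L2-normalized, so the Tensor Dot x @ y.T gives an
    # (N, M) matrix of region-to-region cosine similarities; Chamfer
    # Similarity reduces it to a scalar without aggregating features first.
    return chamfer_similarity(x @ y.T)

def video_similarity(frames_a, frames_b):
    # Simplified illustration: build the frame-to-frame similarity matrix and
    # reduce it with Chamfer Similarity directly. ViSiL instead refines this
    # matrix with a four-layer CNN before the reduction.
    sim = np.array([[frame_similarity(x, y) for y in frames_b]
                    for x in frames_a])
    return chamfer_similarity(sim)
```

With L2-normalized regional features, a video compared against itself scores exactly 1, since each row's maximum in every region-to-region matrix is attained on the diagonal.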


Files in this item


There are no files associated with this item.

