Semantic Spaces for Video Analysis of Behaviour
Abstract
There is ever-growing interest in the computer vision community in human behaviour analysis based on visual sensors. This interest generally covers: (1) behaviour recognition - given a video clip or a specific spatio-temporal volume of interest, classify it into one or more of a set of pre-defined categories; (2) behaviour retrieval - given a video or a textual description as a query, search for video clips with related behaviour; (3) behaviour summarisation - given a number of video clips, extract representative and distinct behaviours. Although countless efforts have been dedicated to the problems above, few works have attempted to analyse human behaviours in a semantic space. In this thesis, we define semantic spaces as a collection of high-dimensional Euclidean spaces in which semantically meaningful entities, e.g. individual words, phrases and visual events, can be represented as vectors or distributions, referred to as semantic representations. Within a semantic space, texts and visual events can be quantitatively compared by inner product, distance or divergence. Introducing semantic spaces brings many benefits to visual analysis. For example, discovering semantic representations for visual data facilitates semantically meaningful video summarisation, retrieval and anomaly detection. A semantic space can also seamlessly bridge categories and datasets that are conventionally treated as independent, encouraging the sharing of data and knowledge across categories and even datasets to improve recognition performance and reduce labelling effort. Moreover, a semantic space enables a learned model to generalise beyond known classes, which is usually referred to as zero-shot learning. Nevertheless, discovering such a semantic space is non-trivial because (1) a semantic space is hard to define manually: humans have a good sense of the semantic relatedness between visual and textual instances, but a measurable and finite semantic space is difficult to construct with limited manual supervision, so we instead construct the semantic space from data in an unsupervised manner; and (2) it is hard to build a universal semantic space, i.e. the space is always context dependent, so it is important to build the semantic space upon selected data such that it remains meaningful within its context. Even with a well-constructed semantic space, challenges remain, including (3) how to represent visual instances in the semantic space, and (4) how to mitigate the misalignment between visual feature spaces and semantic spaces across categories and even datasets when knowledge or data are generalised. This thesis tackles the above challenges by exploiting data from different sources and building contextual semantic spaces with which data and knowledge can be transferred and shared to facilitate general video behaviour analysis.
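To make the idea of comparison in a semantic space concrete, the sketch below is a toy illustration rather than any specific model from the thesis: it assumes words and visual events have already been embedded as vectors (or distributions) in a common space, and shows relatedness being measured by an inner-product-based similarity and by a divergence. All embedding values are hypothetical.

```python
# Toy illustration: comparing semantic representations by inner product and divergence.
import numpy as np

def cosine_similarity(a, b):
    """Inner-product-based relatedness between two semantic vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def kl_divergence(p, q, eps=1e-12):
    """Divergence between two semantic representations expressed as distributions."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical embeddings of a textual query and two visual events.
query_vec = np.array([0.8, 0.1, 0.3])
event_a   = np.array([0.7, 0.2, 0.4])   # semantically close to the query
event_b   = np.array([0.0, 0.9, 0.1])   # semantically distant

print(cosine_similarity(query_vec, event_a))  # high -> related behaviour
print(cosine_similarity(query_vec, event_b))  # low  -> unrelated behaviour
print(kl_divergence(query_vec, event_a))      # small -> similar distributions
```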
To demonstrate the efficacy of semantic spaces for behaviour analysis, we focus on real-world problems including surveillance behaviour analysis, zero-shot human action recognition and zero-shot crowd behaviour recognition, with techniques specifically tailored to the nature of each problem.
Firstly, for video surveillance scenes, we propose to discover semantic representations from the visual data in an unsupervised manner, motivated by the large amount of unlabelled visual data available in surveillance systems. By representing visual instances in the semantic space, data and annotations can be generalised to new events and even new surveillance scenes. Specifically, to detect abnormal events, this thesis studies a geometrical alignment between the semantic representations of events across scenes. Semantic annotations of actions can thus be transferred to new scenes, and abnormal events can be detected in an unsupervised way. To model multiple surveillance scenes simultaneously, we show how to learn a shared semantic representation across a group of semantically related scenes through a multi-layer clustering of scenes. With multi-scene modelling we show how to improve surveillance tasks including scene activity profiling/understanding, cross-scene query-by-example, behaviour classification, and video summarisation.
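The following is a heavily simplified sketch of the multi-scene idea, not the thesis' actual multi-layer model or alignment procedure: it assumes each scene can be summarised as a distribution over shared semantic activity topics, groups semantically related scenes with a single k-means step, and flags an event as abnormal when its semantic representation fits its group's shared profile poorly. All data here are synthetic.

```python
# Simplified sketch: group semantically related scenes and score event abnormality.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical scene profiles: rows = scenes, columns = shared semantic activity topics.
scene_profiles = rng.dirichlet(alpha=np.ones(5), size=8)

# One clustering layer over scenes (the thesis uses a multi-layer clustering).
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scene_profiles)

# Shared semantic representation of each group = mean profile of its member scenes.
group_profiles = np.stack([scene_profiles[groups == g].mean(axis=0) for g in range(2)])

def abnormality(event_repr, group_profile, eps=1e-12):
    """Higher divergence from the group profile -> more abnormal event."""
    p, q = event_repr + eps, group_profile + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

new_event = rng.dirichlet(alpha=np.ones(5))  # semantic representation of a new event in scene 0
print(abnormality(new_event, group_profiles[groups[0]]))
```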
Secondly, to avoid extremely costly and ambiguous video annotation, we investigate how to generalise recognition models learned from known categories to novel ones, which is often termed zero-shot learning. To exploit the limited human supervision available, e.g. category names, we construct the semantic space via a word-vector representation trained on a large textual corpus in an unsupervised manner. The representation of a visual instance in the semantic space is obtained by learning a visual-to-semantic mapping. We observe that blindly applying the mapping learned from known categories to novel categories introduces bias and deteriorates performance, a problem termed domain shift. To address this problem we employ techniques including semi-supervised learning, self-training, hubness correction, multi-task learning and domain adaptation. In combination, these methods achieve state-of-the-art performance on the zero-shot human action recognition task.
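As a minimal sketch of the zero-shot pipeline described above, the code below makes two common simplifying assumptions that are not claims about the thesis' exact formulation: the visual-to-semantic mapping is a ridge regression trained on seen categories only, and a novel video is labelled with the nearest unseen class word vector by cosine similarity. The features, word vectors and class names are all hypothetical; the thesis additionally applies self-training, hubness correction, multi-task learning and domain adaptation on top of such a baseline.

```python
# Minimal zero-shot action recognition sketch: regression to word vectors + nearest class.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
dim_visual, dim_semantic = 128, 50

# Hypothetical word vectors for seen (training) and unseen (novel) action names.
seen_prototypes   = {"run": rng.normal(size=dim_semantic),
                     "walk": rng.normal(size=dim_semantic)}
unseen_prototypes = {"long-jump": rng.normal(size=dim_semantic),
                     "high-jump": rng.normal(size=dim_semantic)}

# Training data from seen categories: visual features paired with their class word vectors.
X_train = rng.normal(size=(200, dim_visual))
y_train = np.stack([list(seen_prototypes.values())[i % 2] for i in range(200)])

# Learn the visual-to-semantic mapping on seen categories only.
mapping = Ridge(alpha=1.0).fit(X_train, y_train)

def classify_zero_shot(x_visual):
    """Project a visual feature into the semantic space, then return the
    unseen class whose word vector is most similar (cosine similarity)."""
    z = mapping.predict(x_visual[None, :])[0]
    scores = {name: np.dot(z, w) / (np.linalg.norm(z) * np.linalg.norm(w))
              for name, w in unseen_prototypes.items()}
    return max(scores, key=scores.get)

print(classify_zero_shot(rng.normal(size=dim_visual)))
```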
Lastly, we study the possibility of re-using known, manually labelled semantic crowd attributes to recognise rare and unknown crowd behaviours, a task termed zero-shot crowd behaviour recognition. Crucially, we point out that, given the multi-label nature of semantic crowd attributes, zero-shot recognition can be improved by exploiting the co-occurrence between attributes.
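The sketch below shows one plausible way co-occurrence can be exploited, not the exact formulation used in the thesis: per-attribute detector scores for a test video are smoothed by an attribute co-occurrence matrix estimated from the multi-label training annotations, and the unseen behaviour whose attribute signature best matches the smoothed scores is selected. The attribute set, annotations, scores and behaviour names are all hypothetical.

```python
# Toy sketch: attribute co-occurrence smoothing for zero-shot crowd behaviour recognition.
import numpy as np

# Hypothetical multi-label attribute annotations of training videos
# (rows = videos, columns = crowd attributes, e.g. "outdoor", "panic", ...).
train_labels = np.array([[1, 1, 0, 0],
                         [1, 0, 1, 0],
                         [0, 1, 1, 1],
                         [1, 1, 0, 1]], dtype=float)

# Attribute co-occurrence matrix, row-normalised.
cooc = train_labels.T @ train_labels
cooc = cooc / cooc.sum(axis=1, keepdims=True)

# Raw per-attribute detector scores for a test video (hypothetical values).
raw_scores = np.array([0.9, 0.2, 0.7, 0.1])

# Propagate evidence between frequently co-occurring attributes.
smoothed = 0.5 * raw_scores + 0.5 * cooc @ raw_scores

# Attribute signatures of unseen crowd behaviours; pick the best match.
unseen_signatures = {"street protest": np.array([1, 0, 1, 0]),
                     "stadium panic":  np.array([0, 1, 1, 1])}
best = max(unseen_signatures, key=lambda k: smoothed @ unseen_signatures[k])
print(best)
```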
To summarise, this thesis studies methods for analysing video behaviours and demonstrates that exploring semantic spaces for video analysis is advantageous and, more importantly, enables multi-scene analysis and zero-shot learning beyond conventional learning strategies.
Authors
Xu, Xun