Towards a Music Language Model for Audio Analysis
(Centre for Digital Music, Queen Mary University of London, 2016-12-20)
Polyphonic Automatic Music Transcription remains a challenging problem. Many studies focus on the extraction of features from audio signals; we focus here on Music Language Models that help turn those features into a ...
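The role a music language model can play in transcription can be sketched, purely for illustration, as a bigram model over note symbols that rescores candidates proposed by an acoustic front end. All function names and the toy data below are invented for this sketch and are not from the paper:

```python
from collections import Counter, defaultdict

def train_bigram(notes):
    """Estimate P(next | current) from a note sequence via counts."""
    transitions = defaultdict(Counter)
    for cur, nxt in zip(notes, notes[1:]):
        transitions[cur][nxt] += 1
    model = {}
    for cur, counter in transitions.items():
        total = sum(counter.values())
        model[cur] = {nxt: c / total for nxt, c in counter.items()}
    return model

def rescore(model, prev_note, candidates, alpha=0.5):
    """Blend acoustic scores with language-model probabilities
    and return the best-scoring candidate note."""
    scored = {}
    for note, acoustic in candidates.items():
        lm = model.get(prev_note, {}).get(note, 1e-6)  # smoothing floor
        scored[note] = alpha * acoustic + (1 - alpha) * lm
    return max(scored, key=scored.get)

# Toy training data: a C-major fragment as MIDI pitches.
training = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]
model = train_bigram(training)

# The acoustic front end is unsure between pitches 62 and 61 after 60;
# the language model breaks the tie toward the musically likely 62.
best = rescore(model, 60, {62: 0.5, 61: 0.5})
```

The blend weight `alpha` controls how much the symbolic prior is trusted relative to the acoustic evidence; real systems use far richer sequence models, but the rescoring idea is the same.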
Automatic Transcription of Vocal Quartets
(Centre for Digital Music, Queen Mary University of London, 2016-12-20)
This work presents a probabilistic latent component analysis (PLCA) method applied to automatic music transcription of a cappella performances of vocal quartets. A variable-Q transform (VQT) representation of the audio ...
Classification of Piano Pedaling Techniques Using Gesture Data from a Non-Intrusive Measurement System
(Centre for Digital Music, Queen Mary University of London, 2016-12-20)
This paper presents the results of a study of piano pedaling techniques on the sustain pedal using a newly designed measurement system. The system comprises an optical sensor mounted in the pedal bearing block and ...
Working Toward Computer-Augmented Music Traditions
(2016-12-20)
We discuss our work in modelling and generating music transcriptions using deep recurrent neural networks. In contrast to similar work, we focus on creating a rich evaluation methodology that seeks to address questions ...
Explaining Predictions of Machine Listening Systems
(Centre for Digital Music, Queen Mary University of London, 2016-12-20)
We adapt local, interpretable and model-agnostic explanations [1] for use with a machine listening system, and demonstrate them for singing voice detection. Such explanations provide ways to understand the behaviour of machine ...
Performable Spectral Synthesis via Low-Dimensional Modelling and Control Mapping
(Centre for Digital Music, Queen Mary University of London, 2016-12-20)
Spectral modelling represents an audio signal as the sum of a finite number of partials – sinusoids tracked through sequential analysis frames. With the goal of real-time user-controllable synthesis in mind, we assume these ...
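The spectral model described here, a signal as a finite sum of sinusoidal partials, can be sketched as a minimal additive resynthesis routine. This is an illustrative example only, not the paper's system; the partial parameters are invented:

```python
import numpy as np

def synthesise(partials, duration=1.0, sr=44100):
    """Sum a finite set of (frequency_hz, amplitude) partials
    into a mono signal of the given duration."""
    t = np.arange(int(duration * sr)) / sr
    signal = np.zeros_like(t)
    for freq, amp in partials:
        signal += amp * np.sin(2 * np.pi * freq * t)
    return signal

# A crude sawtooth-like tone: harmonics of 220 Hz with 1/k amplitudes.
partials = [(220.0 * k, 1.0 / k) for k in range(1, 6)]
y = synthesise(partials, duration=0.5)
```

A performable synthesiser exposes such partial parameters (or a low-dimensional mapping onto them) as its control surface, updating frequencies and amplitudes frame by frame rather than holding them fixed as this sketch does.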
Intelligent Audio Mixing Using Deep Learning
(Centre for Digital Music, Queen Mary University of London, 2016-12-20)
We propose a research trajectory in the field of deep learning applied to music production systems such as mixing, mastering, sound design and sound synthesis.
Automatic Detection of Metrical Structure Changes
(Centre for Digital Music, Queen Mary University of London, 2016-12-20)
Meter inference algorithms are typically designed to track metrical structure in the presence of mild deviations in the feature estimates over time, in order to account for performance imprecisions, expressive timing or musical ...
The Relationship Between Emotion and Music Production Quality
(Centre for Digital Music, Queen Mary University of London, 2016-12-20)
It is well known that music expresses emotion. In music production, the role of the mix engineer is to take a piece of recorded music and convey its expressed emotions as professionally as possible. In this ...