Show simple item record

dc.contributor.author: Fano Yela, D
dc.contributor.author: Ewert, SE
dc.contributor.author: Sandler, MS
dc.contributor.author: FitzGerald, DF
dc.contributor.author: Audio Engineering Society Conference on Semantic Audio
dc.date.accessioned: 2017-04-21T14:55:57Z
dc.date.available: 2017-03-22
dc.date.submitted: 2017-04-12T13:59:40.196Z
dc.identifier.uri: http://qmro.qmul.ac.uk/xmlui/handle/123456789/22556
dc.description.abstract: Musical source separation methods exploit source-specific spectral characteristics to facilitate the decomposition process. Kernel Additive Modelling (KAM) models a source by applying robust statistics to time-frequency bins specified by a source-specific kernel, a function defining similarity between bins. Kernels in existing approaches are typically defined using metrics between single time frames. In the presence of noise and other sound sources, however, information from a single frame turns out to be unreliable, and often incorrect frames are selected as similar. In this paper, we incorporate temporal context into the kernel to provide additional information that stabilizes the similarity search. Evaluated in the context of vocal separation, our simple extension led to a considerable improvement in separation quality compared to previous kernels.
dc.rights: https://arxiv.org/abs/1702.02130
dc.subject: Source Separation
dc.subject: Kernel Additive Modelling
dc.title: On the Importance of Temporal Context in Proximity Kernels: A Vocal Separation Case Study
dc.type: Conference Proceeding
dc.rights.holder: © AES
pubs.notes: No embargo
pubs.publication-status: Accepted
dcterms.dateAccepted: 2017-03-22
qmul.funder: Fusing Semantic and Audio Technologies for Intelligent Music Production and Consumption::Engineering and Physical Sciences Research Council
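The abstract describes a KAM-style approach in which each time frame's kernel selects similar frames and a robust statistic (typically a median) over those frames estimates the repeating part; the paper's contribution is to compute frame similarity on a stacked window of neighbouring frames rather than a single frame. The sketch below is a minimal, hypothetical NumPy illustration of that idea, not the authors' implementation: the function name, parameters, and the Euclidean similarity and soft mask are assumptions made for the example.

```python
import numpy as np

def kam_median_with_context(V, k=10, context=2):
    """Hypothetical sketch: KAM-style median filtering with temporal context.

    V: magnitude spectrogram (freq_bins x frames).
    For each frame, stack `context` neighbouring frames on each side,
    find the k most similar frames (Euclidean distance on the stacked
    vectors), and estimate the repeating background as the element-wise
    median over those frames.
    """
    F, T = V.shape
    # Pad in time so every frame has a full context window.
    Vp = np.pad(V, ((0, 0), (context, context)), mode="edge")
    # Each column now describes a frame together with its neighbours.
    stacked = np.vstack([Vp[:, s:s + T] for s in range(2 * context + 1)])
    # Pairwise squared Euclidean distances between stacked frame vectors.
    sq = np.sum(stacked ** 2, axis=0)
    dist = sq[:, None] + sq[None, :] - 2.0 * (stacked.T @ stacked)
    # k nearest frames (the frame itself included) per column.
    nearest = np.argsort(dist, axis=1)[:, :k]
    # Median over similar frames approximates the repeating (background) part.
    background = np.stack(
        [np.median(V[:, nearest[t]], axis=1) for t in range(T)], axis=1
    )
    # Soft mask for the non-repeating (e.g. vocal) component.
    mask = np.clip(1.0 - background / (V + 1e-8), 0.0, 1.0)
    return background, mask
```

With `context=0` this degenerates to single-frame similarity; increasing `context` is the paper's proposed stabilization of the similarity search.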

