dc.contributor.author	Choi, W
dc.contributor.author	Jeong, YS
dc.contributor.author	Kim, J
dc.contributor.author	Chung, J
dc.contributor.author	Jung, S
dc.contributor.author	Reiss, JD
dc.date.accessioned	2024-07-11T10:44:24Z
dc.date.available	2024-07-11T10:44:24Z
dc.date.issued	2022-09-01
dc.identifier.issn	1549-4950
dc.identifier.uri	https://qmro.qmul.ac.uk/xmlui/handle/123456789/98000
dc.description.abstract	Label-conditioned source separation extracts the target source, specified by an input symbol, from an input mixture track. A recently proposed label-conditioned source separation model, Latent Source Attentive Frequency Transformation (LaSAFT)–Gated Point-Wise Convolutional Modulation (GPoCM)–Net, introduced a block for latent source analysis called LaSAFT. Employing LaSAFT blocks, it established state-of-the-art performance on several tasks of the MUSDB18 benchmark. This paper enhances the LaSAFT block with a self-conditioning method: whereas the existing block considers only the symbolic relationship between the target source symbol and the latent sources, ignoring the audio content, the enhanced block computes its attention mask conditioned on both the label and the input audio feature map. It is shown that a conditioned U-Net employing the enhanced LaSAFT blocks outperforms the previous model, and that the present model can also perform audio-query-based separation with a slight modification.	en_US
dc.format.extent	661 - 673
dc.relation.ispartof	AES: Journal of the Audio Engineering Society
dc.title	Conditioned Source Separation by Attentively Aggregating Frequency Transformations With Self-Conditioning	en_US
dc.type	Article	en_US
dc.identifier.doi	10.17743/jaes.2022.0030
pubs.issue	9	en_US
pubs.notes	Not known	en_US
pubs.publication-status	Published	en_US
pubs.volume	70	en_US
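The abstract describes the enhanced LaSAFT block as attending over latent sources with a mask conditioned on both the target-source label and the input audio feature map, rather than on the label alone. Below is a minimal PyTorch-style sketch of that idea; the class name, layer choices, and tensor shapes are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class SelfConditionedLaSAFTSketch(nn.Module):
    # Hypothetical sketch: attention over latent-source frequency transformations,
    # conditioned on the target-source label embedding and the audio feature map.
    def __init__(self, num_labels, num_latent_sources, channels, embed_dim=64):
        super().__init__()
        self.label_embed = nn.Embedding(num_labels, embed_dim)        # target-source symbol
        self.latent_keys = nn.Parameter(torch.randn(num_latent_sources, embed_dim))
        self.audio_proj = nn.Linear(channels, embed_dim)              # pooled audio content
        self.latent_transforms = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=(1, 3), padding=(0, 1))
             for _ in range(num_latent_sources)]
        )

    def forward(self, x, label):
        # x: (batch, channels, time, freq) feature map; label: (batch,) integer symbols
        pooled = x.mean(dim=(2, 3))                                   # summarize audio content
        query = self.label_embed(label) + self.audio_proj(pooled)     # self-conditioned query
        attn = torch.softmax(query @ self.latent_keys.T, dim=-1)      # (batch, num_latent_sources)
        outs = torch.stack([t(x) for t in self.latent_transforms], dim=1)
        weights = attn.view(attn.size(0), -1, 1, 1, 1)
        return (weights * outs).sum(dim=1)                            # attentive aggregation

Because the attention weights depend on a pooled summary of the mixture as well as on the label, the same label can select a different combination of latent-source frequency transformations for different inputs, which is the self-conditioning effect the abstract refers to.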

