dc.contributor.author: Pasini, M [en_US]
dc.contributor.author: Grachten, M [en_US]
dc.contributor.author: Lattner, S [en_US]
dc.contributor.author: ICASSP 2024 [en_US]
dc.date.accessioned: 2024-02-08T11:17:07Z
dc.date.available: 2023-12-13 [en_US]
dc.identifier.uri: https://qmro.qmul.ac.uk/xmlui/handle/123456789/94512
dc.description.abstract: The ability to automatically generate music that appropriately matches an arbitrary input track is a challenging task. We present a novel controllable system for generating single stems to accompany musical mixes of arbitrary length. At the core of our method are audio autoencoders that efficiently compress audio waveform samples into invertible latent representations, and a conditional latent diffusion model that takes as input the latent encoding of a mix and generates the latent encoding of a corresponding stem. To provide control over the timbre of generated samples, we introduce a technique to ground the latent space to a user-provided reference style during diffusion sampling. To further improve audio quality, we adapt classifier-free guidance to avoid distortions at high guidance strengths when generating in an unbounded latent space. We train our model on a dataset of pairs of mixes and matching bass stems. Quantitative experiments demonstrate that, given an input mix, the proposed system can generate basslines with user-specified timbres. Our controllable conditional audio generation framework represents a significant step forward in creating generative AI tools to assist musicians in music production. [en_US]
dc.rights: This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 International (CC BY 4.0) licence, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and an indication of whether changes were made is included. See: https://creativecommons.org/licenses/by/4.0/.
dc.subject: music [en_US]
dc.subject: accompaniment [en_US]
dc.subject: diffusion [en_US]
dc.subject: generation [en_US]
dc.subject: bass [en_US]
dc.title: Bass Accompaniment Generation via Latent Diffusion [en_US]
dc.type: Conference Proceeding
dc.identifier.doi: 10.48550/arXiv.2402.01412 [en_US]
pubs.notes: Not known [en_US]
pubs.publication-status: Accepted [en_US]
dcterms.dateAccepted: 2023-12-13 [en_US]
rioxxterms.funder: Default funder [en_US]
rioxxterms.identifier.project: Default project [en_US]
qmul.funder: UKRI Centre for Doctoral Training in Artificial Intelligence and Music::Engineering and Physical Sciences Research Council [en_US]
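
The abstract describes a conditional latent diffusion model that denoises a stem latent while conditioned on the latent encoding of a mix, combined with an adapted form of classifier-free guidance (CFG). As a rough illustration only, the Python sketch below shows plain CFG applied to such a conditional denoiser; the paper's specific CFG adaptation for unbounded latent spaces is not reproduced here, and all names, shapes, and the zero-vector "null" conditioning are illustrative assumptions, not the authors' implementation.

import torch

def guided_eps(denoiser, z_t, t, mix_latent, guidance=3.0):
    # One classifier-free-guidance evaluation: run the denoiser with and
    # without the mix conditioning, then extrapolate from the unconditional
    # prediction toward the conditional one.
    eps_cond = denoiser(z_t, t, mix_latent)
    eps_uncond = denoiser(z_t, t, torch.zeros_like(mix_latent))  # assumed null condition
    return eps_uncond + guidance * (eps_cond - eps_uncond)

# Toy usage with a stand-in denoiser (shape-checking only, not a real network).
denoiser = lambda z, t, c: torch.tanh(z + c)  # placeholder network
z = torch.randn(1, 64, 128)    # noisy stem latent: (batch, channels, frames)
mix = torch.randn(1, 64, 128)  # latent encoding of the input mix
print(guided_eps(denoiser, z, torch.tensor(0.5), mix, guidance=4.0).shape)

At high guidance strengths, this plain extrapolation can push samples far outside the training distribution of an unbounded (non-normalised) latent space, which is the distortion problem the abstract says the adapted guidance scheme addresses.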

