dc.contributor.author | Pasini, M | en_US |
dc.contributor.author | Grachten, M | en_US |
dc.contributor.author | Lattner, S | en_US |
dc.relation.ispartof | ICASSP 2024 | en_US |
dc.date.accessioned | 2024-02-08T11:17:07Z | |
dc.date.available | 2023-12-13 | en_US |
dc.identifier.uri | https://qmro.qmul.ac.uk/xmlui/handle/123456789/94512 | |
dc.description.abstract | Automatically generating music that appropriately matches an arbitrary input track is a challenging task. We present a novel controllable system for generating single stems to accompany musical mixes of arbitrary length. At the core of our method are audio autoencoders that efficiently compress audio waveform samples into invertible latent representations, and a conditional latent diffusion model that takes the latent encoding of a mix as input and generates the latent encoding of a corresponding stem. To provide control over the timbre of generated samples, we introduce a technique that grounds the latent space to a user-provided reference style during diffusion sampling. To further improve audio quality, we adapt classifier-free guidance to avoid distortions at high guidance strengths when sampling in an unbounded latent space. We train our model on a dataset of pairs of mixes and matching bass stems. Quantitative experiments demonstrate that, given an input mix, the proposed system can generate basslines with user-specified timbres. Our controllable conditional audio generation framework represents a significant step forward in creating generative AI tools to assist musicians in music production. | en_US |
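The abstract's mention of adapting classifier-free guidance to avoid distortions at high guidance strengths can be illustrated with a minimal sketch. The snippet below shows standard classifier-free guidance combined with a common variance-rescaling heuristic that keeps the guided estimate's magnitude close to the conditional prediction's; the model interface (`model`, `z_t`, `mix_latent`) and the rescaling step are illustrative assumptions, not necessarily the authors' exact adaptation.

```python
import torch

def guided_noise_estimate(model, z_t, t, mix_latent,
                          guidance_scale=3.0, rescale=0.7):
    """Classifier-free guidance with variance rescaling (a sketch).

    `model`, `z_t` (noisy stem latent), and `mix_latent` (mix conditioning)
    are hypothetical names; the rescaling is the common variance-matching
    heuristic, assumed here as one way to temper distortions in an
    unbounded latent space.
    """
    eps_cond = model(z_t, t, cond=mix_latent)   # conditional prediction
    eps_uncond = model(z_t, t, cond=None)       # unconditional prediction
    eps_cfg = eps_uncond + guidance_scale * (eps_cond - eps_uncond)

    # Rescale the guided estimate so its per-sample std matches the
    # conditional one, avoiding out-of-distribution magnitudes when
    # guidance_scale is large.
    dims = tuple(range(1, eps_cond.ndim))
    std_cond = eps_cond.std(dim=dims, keepdim=True)
    std_cfg = eps_cfg.std(dim=dims, keepdim=True)
    eps_rescaled = eps_cfg * (std_cond / std_cfg)
    return rescale * eps_rescaled + (1.0 - rescale) * eps_cfg
```

In this formulation, `rescale` interpolates between the fully rescaled estimate and plain classifier-free guidance, so the correction can be applied gently rather than all-or-nothing.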
dc.rights | This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 International (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/. | |
dc.subject | music | en_US |
dc.subject | accompaniment | en_US |
dc.subject | diffusion | en_US |
dc.subject | generation | en_US |
dc.subject | bass | en_US |
dc.title | Bass Accompaniment Generation via Latent Diffusion | en_US |
dc.type | Conference Proceeding | |
dc.identifier.doi | 10.48550/arXiv.2402.01412 | en_US |
pubs.notes | Not known | en_US |
pubs.publication-status | Accepted | en_US |
dcterms.dateAccepted | 2023-12-13 | en_US |
rioxxterms.funder | Default funder | en_US |
rioxxterms.identifier.project | Default project | en_US |
qmul.funder | UKRI Centre for Doctoral Training in Artificial Intelligence and Music::Engineering and Physical Sciences Research Council | en_US |