Diff-MST: Differentiable Mixing Style Transfer
dc.contributor.author | Vanka, S | |
dc.contributor.author | Steinmetz, C | |
dc.contributor.author | Rolland, J-B | |
dc.contributor.author | Reiss, J | |
dc.contributor.author | Fazekas, G | |
dc.contributor.author | International Society of Music Information Retrieval | |
dc.date.accessioned | 2024-07-16T08:15:32Z | |
dc.date.available | 2024-06-28 | |
dc.date.available | 2024-07-16T08:15:32Z | |
dc.date.issued | 2024-11-10 | |
dc.identifier.uri | https://qmro.qmul.ac.uk/xmlui/handle/123456789/98161 | |
dc.description.abstract | Mixing style transfer automates the generation of a multitrack mix for a given set of tracks by inferring production attributes from a reference song. However, existing systems for mixing style transfer are limited in that they often operate only on a fixed number of tracks, introduce artifacts, and produce mixes in an end-to-end fashion without grounding in traditional audio effects, which prevents interpretability and controllability. To overcome these challenges, we introduce Diff-MST, a framework comprising a differentiable mixing console, a transformer controller, and an audio production style loss function. By inputting raw tracks and a reference song, our model estimates control parameters for audio effects within a differentiable mixing console, producing high-quality mixes and enabling post-hoc adjustments. Moreover, our architecture supports an arbitrary number of input tracks without source labelling, enabling real-world applications. We evaluate our model's performance against robust baselines and showcase the effectiveness of our approach, architectural design, tailored audio production style loss, and innovative training methodology for the given task. | en_US |
dc.publisher | ISMIR | en_US |
dc.subject | DDSP | en_US |
dc.subject | Automatic Mixing | en_US |
dc.subject | Music Production | en_US |
dc.subject | Audio Engineering | en_US |
dc.title | Diff-MST: Differentiable Mixing Style Transfer | en_US |
dc.type | Conference Proceeding | en_US |
pubs.author-url | http://sai-soum.github.io/ | en_US |
pubs.notes | Not known | en_US |
pubs.publication-status | Accepted | en_US |
pubs.publisher-url | https://www.ismir.net/ | en_US |
dcterms.dateAccepted | 2024-06-28 | |
rioxxterms.funder | Default funder | en_US |
rioxxterms.identifier.project | Default project | en_US |
qmul.funder | UKRI Centre for Doctoral Training in Artificial Intelligence and Music::Engineering and Physical Sciences Research Council | en_US |
rioxxterms.funder.project | b215eee3-195d-4c4f-a85d-169a4331c138 | en_US |