Show simple item record

dc.contributor.author: Guinot, J
dc.contributor.author: Fazekas, G
dc.contributor.author: Quinton, E
dc.contributor.author: The 25th International Society for Music Information Retrieval Conference
dc.date.accessioned: 2024-10-31T16:43:34Z
dc.date.available: 2024-06-28
dc.date.available: 2024-10-31T16:43:34Z
dc.identifier.uri: https://qmro.qmul.ac.uk/xmlui/handle/123456789/101165
dc.description.abstract: Despite the success of contrastive learning in Music Information Retrieval, the inherent ambiguity of contrastive self-supervision presents a challenge. Relying solely on augmentation chains and self-supervised positive sampling strategies may lead to a pretraining objective that does not capture key musical information for downstream tasks. We introduce semi-supervised contrastive learning (SemiSupCon), an architecturally simple method for leveraging musically informed supervision signals in the contrastive learning of musical representations. Our approach injects musically relevant supervision signals into self-supervised contrastive learning by combining supervised and self-supervised contrastive objectives in a framework simpler than previous work. This framework improves downstream performance and robustness to audio corruptions across a range of MIR tasks with moderate amounts of labeled data. Our approach enables shaping the learned similarity metric through the choice of labeled data, which (1) infuses the representations with musical domain knowledge and (2) improves out-of-domain performance with minimal loss in general downstream performance. We show strong transfer learning performance on musically related yet not trivially similar tasks, such as pitch and key estimation. Additionally, our approach improves automatic tagging performance over self-supervised approaches when only 5% of the available labels are included in pretraining. (en_US)
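The combination of supervised and self-supervised contrastive objectives described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a SimCLR-style setup with two augmented views per clip, where an anchor's positives are its other view (self-supervised) plus, SupCon-style, all views of clips sharing its label when a label is available; unlabeled clips contribute only the self-supervised positive. The function name and normalization choice are illustrative assumptions.

```python
import numpy as np

def semi_supervised_contrastive_loss(z, labels, temperature=0.1):
    """Contrastive loss over 2N embeddings (two augmented views per clip).

    Positives for each anchor are (a) its other augmented view and
    (b) all views of clips sharing its label, when one exists; unlabeled
    clips fall back to the self-supervised positive only. Illustrative
    sketch, not the paper's code.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    n2 = z.shape[0]                                   # 2N anchors
    n = n2 // 2
    sim = z @ z.T / temperature                       # scaled cosine sims

    # Self-supervised positives: view i and view i+N share a clip.
    pos = np.zeros((n2, n2), dtype=bool)
    idx = np.arange(n)
    pos[idx, idx + n] = pos[idx + n, idx] = True

    # Supervised positives: all view pairs of clips with the same label.
    for i in range(n):
        for j in range(n):
            if i != j and labels[i] is not None and labels[i] == labels[j]:
                for a in (i, i + n):
                    for b in (j, j + n):
                        pos[a, b] = True

    # Log-softmax over all other samples (self-similarity excluded).
    mask_self = np.eye(n2, dtype=bool)
    exp_sim = np.exp(sim) * ~mask_self
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))

    # Negative log-probability averaged over all positive pairs.
    return float(-(log_prob * pos).sum() / pos.sum())
```

Adding label-based positives pulls same-label clips together in the embedding space, which is how the choice of labeled data shapes the learned similarity metric.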
dc.rights: Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Attribution: Julien Guinot, Elio Quinton, György Fazekas, “Semisupervised Contrastive Learning of Musical Representations”, in Proc. of the 25th Int. Society for Music Information Retrieval Conf., San Francisco, United States, 2024.
dc.subject: Music Information Retrieval (en_US)
dc.subject: Contrastive Learning (en_US)
dc.subject: Semi-Supervised Learning (en_US)
dc.subject: Representation Learning (en_US)
dc.title: Proceedings of the 25th International Society for Music Information Retrieval Conference (en_US)
dc.type: Conference Proceeding (en_US)
dc.rights.holder: © Julien Guinot, Elio Quinton, György Fazekas.
pubs.author-url: http://julienguinot.com/ (en_US)
pubs.notes: Not known (en_US)
pubs.publication-status: Accepted (en_US)
dcterms.dateAccepted: 2024-06-28
rioxxterms.funder: Engineering and Physical Sciences Research Council
rioxxterms.identifier.project: UKRI Centre for Doctoral Training in Artificial Intelligence and Music
qmul.funder: UKRI Centre for Doctoral Training in Artificial Intelligence and Music::Engineering and Physical Sciences Research Council (en_US)
rioxxterms.funder.project: cde088de-c928-4c53-9f67-ab5d3484b48a

