Show simple item record

dc.contributor.author: Wang, C
dc.contributor.author: Benetos, E
dc.contributor.author: Lostanlen, V
dc.contributor.author: Chew, E
dc.contributor.author: International Society for Music Information Retrieval Conference
dc.date.accessioned: 2019-08-16T13:16:10Z
dc.date.available: 2019-06-07
dc.date.issued: 2019-11-04
dc.identifier.uri: https://qmro.qmul.ac.uk/xmlui/handle/123456789/59179
dc.description.abstract: Vibratos, tremolos, trills, and flutter-tongue are techniques frequently found in vocal and instrumental music. A common feature of these techniques is the periodic modulation in the time–frequency domain. We propose a representation based on time–frequency scattering to model the inter-class variability for fine discrimination of these periodic modulations. Time–frequency scattering is an instance of the scattering transform, an approach for building invariant, stable, and informative signal representations. The proposed representation is calculated around the wavelet subband of maximal acoustic energy, rather than over all the wavelet bands. To demonstrate the feasibility of this approach, we build a system that computes the representation as input to a machine learning classifier. Whereas previously published datasets for playing technique analysis focus primarily on techniques recorded in isolation, for ecological validity, we create a new dataset to evaluate the system. The dataset, named CBF-periDB, contains full-length expert performances on the Chinese bamboo flute that have been thoroughly annotated by the players themselves. We report F-measures of 99% for flutter-tongue, 82% for trill, 69% for vibrato, and 51% for tremolo detection, and provide explanatory visualisations of scattering coefficients for each of these techniques.
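The abstract's key computational step, restricting analysis to the wavelet subband of maximal acoustic energy, can be illustrated with a minimal NumPy sketch. This toy version substitutes a plain STFT for the paper's time–frequency scattering and detects a synthetic 5 Hz tremolo; all signal parameters and frame sizes here are illustrative assumptions, not values from the paper.

```python
import numpy as np

sr = 8000
t = np.arange(2 * sr) / sr
# Synthetic tremolo: 5 Hz amplitude modulation on a 440 Hz carrier.
x = (1.0 + 0.5 * np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 440 * t)

# Short-time Fourier transform (frame and hop sizes are illustrative).
n_fft, hop = 256, 128
frames = np.stack([x[i:i + n_fft] * np.hanning(n_fft)
                   for i in range(0, len(x) - n_fft, hop)])
spec = np.abs(np.fft.rfft(frames, axis=1))  # shape: (n_frames, n_bins)

# Select the single subband of maximal acoustic energy, as in the paper,
# rather than analysing all bands.
k = np.argmax((spec ** 2).sum(axis=0))

# The amplitude envelope of that subband oscillates at the tremolo rate;
# a Fourier transform of the envelope reveals the modulation frequency.
env = spec[:, k] - spec[:, k].mean()
mod_spectrum = np.abs(np.fft.rfft(env))
frame_rate = sr / hop
mod_rate = np.argmax(mod_spectrum) * frame_rate / len(env)
print(round(mod_rate, 1))  # close to the 5 Hz tremolo rate
```

In the paper itself this role is played by second-order scattering coefficients, which are stable to time-warping; the sketch above only shows why the maximal-energy subband carries the modulation information.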
dc.format.extent: 809 - 815
dc.rights: This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (CC BY 4.0).
dc.rights: Attribution 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/us/
dc.title: Adaptive Time–Frequency Scattering for Periodic Modulation Recognition in Music Signals
dc.type: Conference Proceeding
dc.rights.holder: © The Author(s) 2019
pubs.notes: Not known
pubs.publication-status: Accepted
dcterms.dateAccepted: 2019-06-07
rioxxterms.funder: Default funder
rioxxterms.identifier.project: Default project
qmul.funder: A Machine Learning Framework for Audio Analysis and Retrieval :: Royal Academy of Engineering

