dc.contributor.author | Wang, C | en_US |
dc.contributor.author | Benetos, E | en_US |
dc.contributor.author | Lostanlen, V | en_US |
dc.contributor.author | Chew, E | en_US |
dc.contributor.author | International Society for Music Information Retrieval Conference | en_US |
dc.date.accessioned | 2019-08-16T13:16:10Z | |
dc.date.available | 2019-06-07 | en_US |
dc.date.issued | 2019-11-04 | en_US |
dc.identifier.uri | https://qmro.qmul.ac.uk/xmlui/handle/123456789/59179 | |
dc.description.abstract | Vibratos, tremolos, trills, and flutter-tongue are techniques frequently found in vocal and instrumental music. A common feature of these techniques is the periodic modulation in the time–frequency domain. We propose a representation based on time–frequency scattering to model the inter-class variability for fine discrimination of these periodic modulations. Time–frequency scattering is an instance of the scattering transform, an approach for building invariant, stable, and informative signal representations. The proposed representation is calculated around the wavelet subband of maximal acoustic energy, rather than over all the wavelet bands. To demonstrate the feasibility of this approach, we build a system that computes the representation as input to a machine learning classifier. Whereas previously published datasets for playing technique analysis focus primarily on techniques recorded in isolation, for ecological validity, we create a new dataset to evaluate the system. The dataset, named CBF-periDB, contains full-length expert performances on the Chinese bamboo flute that have been thoroughly annotated by the players themselves. We report F-measures of 99% for flutter-tongue, 82% for trill, 69% for vibrato, and 51% for tremolo detection, and provide explanatory visualisations of scattering coefficients for each of these techniques. | en_US |
dc.format.extent | 809 - 815 | en_US |
dc.rights | This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (CC BY 4.0). | |
dc.rights | Attribution 4.0 International | * |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | * |
dc.title | Adaptive Time–Frequency Scattering for Periodic Modulation Recognition in Music Signals | en_US |
dc.type | Conference Proceeding | |
dc.rights.holder | © The Author(s) 2019 | |
pubs.notes | Not known | en_US |
pubs.publication-status | Accepted | en_US |
dcterms.dateAccepted | 2019-06-07 | en_US |
rioxxterms.funder | Default funder | en_US |
rioxxterms.identifier.project | Default project | en_US |
qmul.funder | A Machine Learning Framework for Audio Analysis and Retrieval::Royal Academy of Engineering | en_US |