Show simple item record

dc.contributor.author: Ycart, A
dc.contributor.author: Benetos, E
dc.date.accessioned: 2020-04-29T09:01:37Z
dc.date.available: 2020-04-07
dc.date.available: 2020-04-29T09:01:37Z
dc.date.issued: 2020
dc.identifier.citation: Ycart, Adrien, and Emmanouil Benetos. "Learning and Evaluation Methodologies for Polyphonic Music Sequence Prediction with LSTMs". IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2020, pp. 1-1. Institute of Electrical and Electronics Engineers (IEEE), doi:10.1109/taslp.2020.2987130. Accessed 29 Apr 2020. (en_US)
dc.identifier.issn: 2329-9304
dc.identifier.uri: https://qmro.qmul.ac.uk/xmlui/handle/123456789/63818
dc.description.abstract: Music language models (MLMs) play an important role in various music signal and symbolic music processing tasks, such as music generation, symbolic music classification, or automatic music transcription (AMT). In this paper, we investigate Long Short-Term Memory (LSTM) networks for polyphonic music prediction, in the form of binary piano rolls. A preliminary experiment, assessing the influence of the timestep of piano rolls on system performance, highlights the need for more musical evaluation metrics. We introduce a range of metrics, focusing on temporal and harmonic aspects. We propose to combine them into a parametrisable loss to train our network. We then conduct a range of experiments with this new loss, both for polyphonic music prediction (intrinsic evaluation) and using our predictive model as a language model for AMT (extrinsic evaluation). Intrinsic evaluation shows that tuning the behaviour of a model is possible by adjusting loss parameters, with consistent results across timesteps. Extrinsic evaluation shows consistent behaviour across timesteps in terms of precision and recall with respect to the loss parameters, leading to an improvement in AMT performance without changing the complexity of the model. In particular, we show that intrinsic performance (in terms of cross entropy) is not related to extrinsic performance, highlighting the importance of using custom training losses for each specific application. Our model also compares favourably with previously proposed MLMs. (en_US)
dc.format.extent: ? - ? (14)
dc.publisher: Institute of Electrical and Electronics Engineers (en_US)
dc.relation.ispartof: IEEE/ACM Transactions on Audio, Speech and Language Processing
dc.title: Learning and Evaluation Methodologies for Polyphonic Music Sequence Prediction with LSTMs (en_US)
dc.type: Article (en_US)
dc.rights.holder: © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
pubs.notes: Not known (en_US)
pubs.publication-status: Accepted (en_US)
dcterms.dateAccepted: 2020-04-07
rioxxterms.funder: Default funder (en_US)
rioxxterms.identifier.project: Default project (en_US)
qmul.funder: A Machine Learning Framework for Audio Analysis and Retrieval::Royal Academy of Engineering (en_US)
rioxxterms.funder.project: 483cf8e1-88a1-4b8b-aecb-8402672d45f8 (en_US)
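The abstract describes model inputs as binary piano rolls sampled at a fixed timestep, whose choice the preliminary experiment studies. As a minimal stdlib-only sketch of that representation (an illustration, not the authors' code; the function name, the note-event format, and the 88-key range starting at MIDI pitch 21 are assumptions):

```python
import math

def piano_roll(notes, timestep, n_pitches=88, lowest=21):
    """Quantise note events into a binary piano roll.

    notes:    list of (midi_pitch, onset_sec, offset_sec) tuples.
    timestep: frame duration in seconds.
    Returns a list of frames, each a list of n_pitches 0/1 values.
    """
    if not notes:
        return []
    # Number of frames needed to cover the latest note offset.
    n_frames = math.ceil(max(off for _, _, off in notes) / timestep)
    roll = [[0] * n_pitches for _ in range(n_frames)]
    for pitch, on, off in notes:
        start = int(on / timestep)
        # Every note occupies at least one frame, even if shorter
        # than the timestep.
        stop = max(start + 1, math.ceil(off / timestep))
        for t in range(start, min(stop, n_frames)):
            roll[t][pitch - lowest] = 1
    return roll
```

A sequence-prediction model such as the LSTM described in the abstract is then trained to predict frame roll[t] from the preceding frames, so the timestep directly controls both sequence length and temporal resolution.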

