Show simple item record

dc.contributor.author: Cheuk, KW [en_US]
dc.contributor.author: Luo, Y-J [en_US]
dc.contributor.author: Benetos, E [en_US]
dc.contributor.author: Herremans, D [en_US]
dc.contributor.author: International Joint Conference on Neural Networks (IJCNN) [en_US]
dc.date.accessioned: 2021-05-25T15:22:39Z
dc.date.available: 2021-04-10 [en_US]
dc.date.issued: 2021-07-18 [en_US]
dc.identifier.uri: https://qmro.qmul.ac.uk/xmlui/handle/123456789/72070
dc.description.abstract: Recent advances in automatic music transcription (AMT) have achieved highly accurate polyphonic piano transcription by incorporating onset and offset detection. The existing literature, however, focuses mainly on leveraging deep and complex models to achieve state-of-the-art (SOTA) accuracy, without understanding model behaviour. In this paper, we conduct a comprehensive examination of the Onsets-and-Frames AMT model and pinpoint the essential components contributing to strong AMT performance. This is achieved through a modified additive attention mechanism. The experimental results suggest that attention beyond a moderate temporal context does not benefit the model, and that rule-based post-processing is largely responsible for the SOTA performance. We also demonstrate that onsets are the most significant attentive feature regardless of model complexity. These findings encourage AMT research to place more weight on both a robust onset detector and an effective post-processor. [en_US]
dc.format.extent: ? - ? (8) [en_US]
dc.publisher: IEEE [en_US]
dc.relation.replaces: 123456789/71866
dc.relation.replaces: https://qmro.qmul.ac.uk/xmlui/handle/123456789/71866
dc.rights: © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/us/
dc.title: Revisiting the onsets and frames model with additive attention [en_US]
dc.type: Conference Proceeding
pubs.merge-from: 123456789/71866
pubs.merge-from: https://qmro.qmul.ac.uk/xmlui/handle/123456789/71866
pubs.notes: Not known [en_US]
pubs.publication-status: Accepted [en_US]
pubs.publisher-url: https://www.ijcnn.org/ [en_US]
dcterms.dateAccepted: 2021-04-10 [en_US]
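
For readers unfamiliar with the attention mechanism named in the abstract, the following is a minimal sketch of generic additive (Bahdanau-style) attention in PyTorch. It illustrates the general technique only; it is not the authors' modified variant from the paper, and all class names, dimensions, and variable names below are illustrative assumptions.

# Minimal sketch of generic additive (Bahdanau-style) attention.
# Not the paper's modified mechanism; names and shapes are illustrative.
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, query_dim: int, key_dim: int, attn_dim: int):
        super().__init__()
        self.w_query = nn.Linear(query_dim, attn_dim, bias=False)
        self.w_key = nn.Linear(key_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, query, keys):
        # query: (batch, query_dim)      -- e.g. the current frame's state
        # keys:  (batch, time, key_dim)  -- e.g. features over a temporal context
        scores = self.v(torch.tanh(self.w_query(query).unsqueeze(1) + self.w_key(keys)))
        weights = torch.softmax(scores, dim=1)   # attention weights over the time axis
        context = (weights * keys).sum(dim=1)    # (batch, key_dim) weighted summary
        return context, weights.squeeze(-1)

if __name__ == "__main__":
    attn = AdditiveAttention(query_dim=128, key_dim=256, attn_dim=64)
    q = torch.randn(2, 128)       # batch of 2 query vectors
    k = torch.randn(2, 10, 256)   # 10 time steps of features per item
    context, weights = attn(q, k)
    print(context.shape, weights.shape)  # torch.Size([2, 256]) torch.Size([2, 10])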

