    Joint multi-pitch detection and score transcription for polyphonic piano music 

    Accepted version
    Embargoed until: 2099-01-01
    Pagination
    ? - ? (5)
    Publisher
    IEEE
    Publisher URL
    https://2021.ieeeicassp.org/
    Abstract
    Research on automatic music transcription has largely focused on multi-pitch detection; there is limited discussion of how to obtain a machine- or human-readable score transcription. In this paper, we propose a method for joint multi-pitch detection and score transcription for polyphonic piano music. The outputs of our system include both a piano-roll representation (a descriptive transcription) and symbolic musical notation (a prescriptive transcription). Unlike traditional methods that first produce a MIDI transcription and then convert it into a musical score, we use a multitask model that combines a Convolutional Recurrent Neural Network with Sequence-to-sequence models using attention mechanisms. We propose a Reshaped score representation that outperforms a LilyPond representation in both prediction accuracy and time/memory requirements, and we compare different input audio spectrograms. We also create a new synthesized dataset for score transcription research. Experimental results show that the joint model outperforms a single-task model in score transcription.
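    The abstract describes a multitask model trained jointly on two outputs: a frame-level piano roll (multi-pitch detection) and a token sequence encoding the score (score transcription). The sketch below illustrates one way such a joint objective can be formed, as a weighted sum of a frame-wise binary cross-entropy and a sequence cross-entropy. All shapes, function names, and the weighting hyperparameter `alpha` are illustrative assumptions for exposition, not the authors' exact formulation.

    ```python
    import numpy as np

    def multi_pitch_loss(pred_roll, true_roll, eps=1e-9):
        """Frame-wise binary cross-entropy over an 88-key piano roll.

        pred_roll: (frames, 88) predicted note activations in (0, 1)
        true_roll: (frames, 88) binary ground-truth piano roll
        """
        pred_roll = np.clip(pred_roll, eps, 1 - eps)
        bce = -(true_roll * np.log(pred_roll)
                + (1 - true_roll) * np.log(1 - pred_roll))
        return bce.mean()

    def score_token_loss(token_logits, true_tokens, eps=1e-9):
        """Cross-entropy over score tokens (e.g. a 'Reshaped' vocabulary).

        token_logits: (seq_len, vocab) unnormalised decoder outputs
        true_tokens:  (seq_len,) integer token indices
        """
        # numerically stable softmax over the vocabulary axis
        z = token_logits - token_logits.max(axis=-1, keepdims=True)
        probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
        nll = -np.log(probs[np.arange(len(true_tokens)), true_tokens] + eps)
        return nll.mean()

    def joint_loss(pred_roll, true_roll, token_logits, true_tokens, alpha=0.5):
        """Weighted sum of the two task losses (alpha is an assumed hyperparameter)."""
        return (alpha * multi_pitch_loss(pred_roll, true_roll)
                + (1 - alpha) * score_token_loss(token_logits, true_tokens))

    # toy example: 4 frames x 88 keys, 3 score tokens from a 10-symbol vocabulary
    rng = np.random.default_rng(0)
    roll_pred = rng.random((4, 88))
    roll_true = (rng.random((4, 88)) > 0.95).astype(float)
    logits = rng.standard_normal((3, 10))
    tokens = np.array([2, 5, 1])
    print(joint_loss(roll_pred, roll_true, logits, tokens))
    ```

    In a trained system both losses would be backpropagated through a shared encoder (here, the CRNN), which is what lets the score-transcription head benefit from the pitch-detection signal and vice versa.
    
    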
    Authors
    Liu, L; Morfi, G-V; Benetos, E
    Conference
    IEEE International Conference on Acoustics, Speech and Signal Processing
    URI
    https://qmro.qmul.ac.uk/xmlui/handle/123456789/70432
    Collections
    • Electronic Engineering and Computer Science [1833]
    Copyright statements
    © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
