    An End-to-End Neural Network for Polyphonic Music Transcription

Publisher
arXiv
    Abstract
We present a neural network model for polyphonic music transcription. The architecture of the proposed model is analogous to speech recognition systems and comprises an acoustic model and a music language model. The acoustic model is a neural network used for estimating the probabilities of pitches in a frame of audio. The language model is a recurrent neural network that models the correlations between pitch combinations over time. The proposed model is general and can be used to transcribe polyphonic music without imposing any constraints on the polyphony or the number or type of instruments. The acoustic and language model predictions are combined using a probabilistic graphical model. Inference over the output variables is performed using the beam search algorithm. We investigate various neural network architectures for the acoustic models and compare their performance to two popular state-of-the-art acoustic models. We also present an efficient variant of beam search that improves performance and reduces run-times by an order of magnitude, making the model suitable for real-time applications. We evaluate the model's performance on the MAPS dataset and show that the proposed model outperforms state-of-the-art transcription systems.
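As a rough illustration of the decoding step described in the abstract (a minimal sketch, not the paper's implementation), the code below combines frame-level pitch probabilities from an acoustic model with a language-model prior using beam search. The interfaces are assumptions: acoustic_probs stands in for the acoustic model's output of independent per-pitch activation probabilities, lm_score stands in for the RNN language model's log prior over a pitch combination given its history, and the beam width and candidate-generation heuristic are arbitrary choices.

    import heapq
    import itertools

    import numpy as np


    def beam_search_decode(acoustic_probs, lm_score, beam_width=10, n_uncertain=4):
        """Hypothetical decoder fusing acoustic and language-model scores.

        acoustic_probs: (T, P) array of per-frame, per-pitch probabilities.
        lm_score: callable(history, frame) -> log prior of `frame` (a binary
            pitch vector) given the list of previously decoded frames.
        Returns the highest-scoring sequence of binary pitch vectors.
        """
        T, P = acoustic_probs.shape
        eps = 1e-10
        beam = [(0.0, [])]  # (cumulative log score, decoded frames so far)
        for t in range(T):
            probs = acoustic_probs[t]
            # Enumerate candidate pitch combinations by toggling the pitches
            # whose probabilities lie closest to the 0.5 decision threshold.
            base = (probs > 0.5).astype(int)
            uncertain = np.argsort(np.abs(probs - 0.5))[:n_uncertain]
            candidates = set()
            for bits in itertools.product((0, 1), repeat=len(uncertain)):
                frame = base.copy()
                frame[uncertain] = bits
                candidates.add(tuple(frame))
            new_beam = []
            for score, history in beam:
                for cand in candidates:
                    f = np.array(cand)
                    # Acoustic log-likelihood under independent Bernoulli pitches.
                    acoustic = np.sum(f * np.log(probs + eps)
                                      + (1 - f) * np.log(1 - probs + eps))
                    total = score + acoustic + lm_score(history, f)
                    new_beam.append((total, history + [f]))
            # Prune to the top `beam_width` hypotheses.
            beam = heapq.nlargest(beam_width, new_beam, key=lambda x: x[0])
        return beam[0][1]

Setting lm_score to a function that always returns 0.0 reduces this to independent frame-wise thresholding, which makes the language model's role explicit: it re-ranks candidate pitch combinations using temporal context rather than scoring each frame in isolation.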
    Authors
    Sigtia, S; Benetos, E; Dixon, S
    URI
    http://qmro.qmul.ac.uk/xmlui/handle/123456789/9490
    Collections
    • Centre for Digital Music (C4DM) [210]