Show simple item record

dc.contributor.author: Sigtia, S
dc.contributor.author: Benetos, E
dc.contributor.author: Dixon, S
dc.date.accessioned: 2015-12-01T14:57:53Z
dc.date.available: 2015-12-01T14:57:53Z
dc.date.issued: 2015-08
dc.date.submitted: 2015-11-18T12:45:42.176Z
dc.identifier.citation: Sigtia, Siddharth, Emmanouil Benetos, and Simon Dixon, 'An End-to-End Neural Network for Polyphonic Music Transcription', arXiv.org, 2015 <http://arxiv.org/abs/1508.01774> [accessed 1 December 2015] (en_US)
dc.identifier.uri: http://qmro.qmul.ac.uk/xmlui/handle/123456789/9490
dc.description.abstract: We present a neural network model for polyphonic music transcription. The architecture of the proposed model is analogous to speech recognition systems and comprises an acoustic model and a music language model. The acoustic model is a neural network used for estimating the probabilities of pitches in a frame of audio. The language model is a recurrent neural network that models the correlations between pitch combinations over time. The proposed model is general and can be used to transcribe polyphonic music without imposing any constraints on the polyphony or the number or type of instruments. The acoustic and language model predictions are combined using a probabilistic graphical model. Inference over the output variables is performed using the beam search algorithm. We investigate various neural network architectures for the acoustic models and compare their performance to two popular state-of-the-art acoustic models. We also present an efficient variant of beam search that improves performance and reduces run-times by an order of magnitude, making the model suitable for real-time applications. We evaluate the model's performance on the MAPS dataset and show that the proposed model outperforms state-of-the-art transcription systems. (en_US)
dc.language.iso: en (en_US)
dc.publisher: arXiv (en_US)
dc.relation.isreplacedby: 123456789/17623
dc.relation.isreplacedby: http://qmro.qmul.ac.uk/xmlui/handle/123456789/17623
dc.relation.isreplacedby: https://qmro.qmul.ac.uk/handle/123456789/17623
dc.relation.isreplacedby: https://qmro.qmul.ac.uk/xmlui/handle/123456789/17623
dc.subject: stat.ML (en_US)
dc.subject: cs.LG (en_US)
dc.subject: cs.SD (en_US)
dc.title: An End-to-End Neural Network for Polyphonic Music Transcription (en_US)
dc.type: Article (en_US)
pubs.author-url: http://arxiv.org/abs/1508.01774v1
pubs.declined: 2015-11-19T10:11:36.601+0000
pubs.deleted: 2015-11-19T10:11:36.601+0000
pubs.merge-to: 123456789/17623
pubs.merge-to: http://qmro.qmul.ac.uk/xmlui/handle/123456789/17623
pubs.merge-to: https://qmro.qmul.ac.uk/handle/123456789/17623
pubs.merge-to: https://qmro.qmul.ac.uk/xmlui/handle/123456789/17623

