dc.contributor.author | Choi, K | en_US |
dc.contributor.author | Sandler, M | en_US |
dc.contributor.author | Fazekas, G | en_US |
dc.contributor.author | Conference on Computer Simulation of Musical Creativity | en_US |
dc.date.accessioned | 2016-05-26T13:05:46Z | |
dc.date.available | 2016-04-15 | en_US |
dc.date.issued | 2016-06-18 | en_US |
dc.date.submitted | 2016-04-27T17:15:18.771Z | |
dc.identifier.uri | http://qmro.qmul.ac.uk/xmlui/handle/123456789/12552 | |
dc.description.abstract | In this paper, we introduce new methods and discuss results of text-based LSTM (Long Short-Term Memory) networks for automatic music composition. The proposed network is designed to learn relationships within text documents that represent chord progressions and drum tracks in two case studies. In the experiments, word-RNNs (Recurrent Neural Networks) show good results for both cases, while character-based RNNs (char-RNNs) only succeed in learning chord progressions. The proposed system can be used for fully automatic composition, or as a semi-automatic system that helps humans compose music by controlling a diversity parameter of the model. | en_US |
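The "diversity parameter" mentioned in the abstract is commonly realised as a softmax temperature applied when sampling the next symbol from the network's output distribution. The sketch below is not taken from the paper; it is a minimal illustration, with a hypothetical chord vocabulary, of how such a parameter trades off conservative versus varied generation.

```python
# Minimal sketch (illustrative, not the authors' code): temperature-style
# "diversity" control over sampling from an RNN's softmax output.
import numpy as np

def sample_with_diversity(probs, diversity=1.0):
    """Re-weight a probability vector by a diversity (temperature) value
    and sample one index. Lower diversity -> safer, more repetitive output;
    higher diversity -> more varied, riskier output."""
    probs = np.asarray(probs, dtype=np.float64)
    log_p = np.log(probs + 1e-8) / diversity  # scale in log space
    exp_p = np.exp(log_p)
    exp_p /= exp_p.sum()                      # renormalise to a distribution
    return np.random.choice(len(exp_p), p=exp_p)

# Example: next-chord distribution over a hypothetical 4-chord vocabulary.
next_chord_probs = [0.6, 0.2, 0.15, 0.05]
idx = sample_with_diversity(next_chord_probs, diversity=0.8)
```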
dc.rights | arXiv record http://arxiv.org/abs/1604.05358. Presented at Conference on Computer Simulation of Musical Creativity | |
dc.subject | automatic composition | en_US |
dc.subject | lstm | en_US |
dc.subject | rnn | en_US |
dc.subject | algorithmic composition | en_US |
dc.title | Text-based LSTM networks for Automatic Music Composition | en_US |
dc.type | Conference Proceeding | |
pubs.notes | No embargo | en_US |
pubs.publication-status | Accepted | en_US |
dcterms.dateAccepted | 2016-04-15 | en_US |
qmul.funder | Fusing Semantic and Audio Technologies for Intelligent Music Production and Consumption::Engineering and Physical Sciences Research Council | en_US |