dc.contributor.author | Moro, G | |
dc.contributor.author | McPherson, A | |
dc.contributor.author | International Conference on New Interfaces for Musical Expression | |
dc.date.accessioned | 2020-05-19T08:31:21Z | |
dc.date.available | 2020-04-02 | |
dc.date.available | 2020-05-19T08:31:21Z | |
dc.date.issued | 2020-07-20 | |
dc.identifier.issn | 2220-4806 | |
dc.identifier.uri | https://qmro.qmul.ac.uk/xmlui/handle/123456789/64180 | |
dc.description.abstract | On several acoustic and electromechanical keyboard instruments, the produced sound does not always depend exclusively on a discrete key velocity parameter, and minute gesture details can affect the final sonic result. By contrast, subtle variations in articulation have a relatively limited effect on sound generation when the keyboard controller uses the MIDI standard, as the vast majority of digital keyboards do. In this paper we present an embedded platform that can generate sound in response to a controller capable of sensing the continuous position of the keys on a keyboard. This platform enables the creation of keyboard-based DMIs which allow for a richer set of interaction gestures than would be possible through a MIDI keyboard, which we demonstrate through two example instruments. First, in a Hammond organ emulator, the sensing device makes it possible to recreate the nuances of the interaction with the original instrument in a way that a velocity-based MIDI controller could not. Second, a nonlinear waveguide flute synthesizer is shown as an example of the expressive capabilities that a continuous-keyboard controller opens up in the creation of new keyboard-based DMIs. | en_US |
dc.publisher | International Conference on New Interfaces for Musical Expression | en_US |
dc.rights | Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Copyright remains with the author(s). | |
dc.rights | Attribution 3.0 United States | * |
dc.rights.uri | http://creativecommons.org/licenses/by/3.0/us/ | * |
dc.title | A platform for low-latency continuous keyboard sensing and sound generation | en_US |
dc.type | Conference Proceeding | en_US |
dc.rights.holder | © 2020 The Authors. | |
pubs.notes | Not known | en_US |
pubs.publication-status | Accepted | en_US |
dcterms.dateAccepted | 2020-04-02 | |
rioxxterms.funder | Default funder | en_US |
rioxxterms.identifier.project | Default project | en_US |
qmul.funder | Fusing Semantic and Audio Technologies for Intelligent Music Production and Consumption::Engineering and Physical Sciences Research Council | en_US |