
dc.contributor.author: McPherson, AP [en_US]
dc.contributor.author: Jack, RH [en_US]
dc.contributor.author: Moro, G [en_US]
dc.contributor.author: Proceedings of the International Conference on New Interfaces for Musical Expression, Brisbane, Queensland, Australia, July 11-15, 2016 [en_US]
dc.date.accessioned: 2016-05-24T10:51:01Z
dc.date.available: 2016-03-30 [en_US]
dc.date.issued: 2016-07-11 [en_US]
dc.date.submitted: 2016-05-05T23:29:14.664Z
dc.identifier.uri: http://qmro.qmul.ac.uk/xmlui/handle/123456789/12479
dc.description: Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Copyright remains with the author(s).
dc.description.abstract: The importance of low and consistent latency in interactive music systems is well-established. So how do commonly-used tools for creating digital musical instruments and other tangible interfaces perform in terms of latency from user action to sound output? This paper examines several common configurations where a microcontroller (e.g. Arduino) or wireless device communicates with a computer-based sound generator (e.g. Max/MSP, Pd). We find that, perhaps surprisingly, almost none of the tested configurations meet generally-accepted guidelines for latency and jitter. To address this limitation, the paper presents a new embedded platform, Bela, which is capable of complex audio and sensor processing at submillisecond latency. [en_US]
dc.publisher: Griffith University [en_US]
dc.rights: To be published in https://nime2016.wordpress.com/
dc.subject: Latency [en_US]
dc.subject: Jitters [en_US]
dc.subject: Timing accuracy [en_US]
dc.subject: Embodied interaction [en_US]
dc.subject: Musical interaction [en_US]
dc.subject: Embedded hardware [en_US]
dc.title: Action-Sound Latency: Are Our Tools Fast Enough? [en_US]
dc.type: Conference Proceeding
pubs.notes: No embargo [en_US]
pubs.publication-status: Accepted [en_US]
pubs.publisher-url: https://nime2016.wordpress.com/ [en_US]
dcterms.dateAccepted: 2016-03-30 [en_US]
qmul.funder: Fusing Semantic and Audio Technologies for Intelligent Music Production and Consumption::Engineering and Physical Sciences Research Council [en_US]

