dc.contributor.author | OLOWE, I | en_US |
dc.contributor.author | BARTHET, M | en_US |
dc.contributor.author | GRIERSON, M | en_US |
dc.contributor.author | BRYAN-KINNS, N | en_US |
dc.contributor.author | International Conference on Technologies for Music Notation and Representation (TENOR) | en_US |
dc.date.accessioned | 2016-09-21T11:20:02Z | |
dc.date.available | 2016-02-04 | en_US |
dc.date.submitted | 2016-04-29T11:10:17.999Z | |
dc.identifier.uri | http://qmro.qmul.ac.uk/xmlui/handle/123456789/15516 | |
dc.description.abstract | FEATUR.UX (Feature-ous) is an audio visualisation tool, currently in development, which proposes a new approach to sound visualisation using pre-mixed, independent multitracks and audio feature extraction. Sound visualisation is usually performed using a mixed mono or stereo track of audio. Audio feature extraction is commonly used in the field of music information retrieval to create search and recommendation systems for large music databases rather than to generate live visualisations. Visualising multitrack audio circumvents problems related to the source separation of mixed audio signals and presents an opportunity to examine interdependent relationships within and between separate streams of music. This novel approach to sound visualisation aims to provide an enhanced listening experience in a use case that employs non-tonal, non-notated forms of electronic music. Findings from prior research studies focused on live performance and preliminary quantitative results from a user survey have provided the basis from which to develop a prototype for an iterative design study that examines the impact of using multitrack audio and audio feature extraction within sound visualisation practice. | en_US |
dc.language.iso | en | en_US |
dc.rights | CC-BY | |
dc.subject | audio features | en_US |
dc.subject | sound visualization | en_US |
dc.subject | GUI | en_US |
dc.subject | multitrack audio | en_US |
dc.title | FEATUR.UX: An approach to leveraging multitrack information for artistic music visualization | en_US |
dc.type | Conference Proceeding | |
dc.rights.holder | (c) 2016 Ireti Olowe et al | |
pubs.notes | Not known | en_US |
pubs.publication-status | Published | en_US |
dcterms.dateAccepted | 2016-02-04 | en_US |
qmul.funder | Fusing Semantic and Audio Technologies for Intelligent Music Production and Consumption::Engineering and Physical Sciences Research Council | en_US |