Show simple item record

dc.contributor.author: Olowe, Ireti
dc.date.accessioned: 2021-04-06T13:52:43Z
dc.date.available: 2021-04-06T13:52:43Z
dc.date.issued: 2020-12-15
dc.identifier.citation: Olowe, Ireti. 2020. Bountiful Data: Leveraging Multitrack Audio and Content-Based for Audiovisual Performance. Queen Mary University of London.
dc.identifier.uri: https://qmro.qmul.ac.uk/xmlui/handle/123456789/71073
dc.description: PhD Thesis.
dc.description.abstract: Artists and researchers have long theorized and evaluated connections between sound and image in the context of musical performance. In our investigation, we introduce music information retrieval (MIR) techniques to the practice of live sound visualization and use computer-generated graphics to create aesthetic representations of music signals in real time. This thesis presents research that assesses design requirements for live audiovisual practice and evaluates different sound-and-image interaction systems. We propose a visualization method based on automated music analysis and multitrack audio to provide fine-grained control over audio-to-visual mapping and to support creative practice. We adopted a user-centered design approach informed by a meta-analysis of user studies exploring contemporary methods of live visual performance. We then conducted online surveys collecting general and specialist knowledge about audiovisual practices, multitrack audio, and audio feature extraction from over 50 practitioners. We performed research through design (RtD) and developed four audiovisual artifacts to test different audiovisual paradigms according to user interaction with audio data, mapping strategies, expression, and affordances. This helped us identify features and limitations of audiovisual models for live performance. Our final prototype (FEATUR.UX.AV) enables users to compose live visuals driven by audio features extracted from multiple instrumental audio stems. We conducted an experiment with 22 audiovisual performers to assess visualization methods under different audio input (multitrack, final mix) and audio feature (raw audio, content-based audio features) conditions. We used Human-Computer Interaction (HCI) frameworks to assess usability, hedonic experience, preference, and value as a creativity support tool. In addition to established frameworks, we used qualitative methods to analyze reflective feedback from open-answer questions related to aspects of user experience. This evaluation helped us gain insight into the nuances of user experience and highlight advantages and drawbacks of multitrack audio and audio content analysis for live audiovisual practice.
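The per-stem analysis described in the abstract (content-based audio features computed independently on each instrumental stem, then mapped to visual parameters) can be sketched as follows. This is an illustrative NumPy sketch, not the FEATUR.UX.AV implementation; the particular features (RMS energy, spectral centroid), the stem names, and the suggested visual mapping are assumptions for illustration only.

```python
import numpy as np

SR = 22050          # sample rate (Hz); an assumed value for this sketch
FRAME = 2048        # analysis frame length in samples

def rms(frame):
    """Root-mean-square energy of one frame (a loudness proxy)."""
    return float(np.sqrt(np.mean(frame ** 2)))

def spectral_centroid(frame, sr=SR):
    """Magnitude-weighted mean frequency of one frame (a brightness proxy)."""
    mags = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    total = mags.sum()
    return float((freqs * mags).sum() / total) if total > 0 else 0.0

def stem_features(stems, sr=SR):
    """Compute content-based features for each named stem separately,
    so each instrument can drive its own visual layer."""
    return {
        name: {"rms": rms(sig[:FRAME]),
               "centroid_hz": spectral_centroid(sig[:FRAME], sr)}
        for name, sig in stems.items()
    }

# Two synthetic stems standing in for multitrack input:
# a loud low bass tone and a quieter high lead tone.
t = np.arange(SR) / SR
stems = {
    "bass": 0.8 * np.sin(2 * np.pi * 110 * t),
    "lead": 0.4 * np.sin(2 * np.pi * 880 * t),
}
feats = stem_features(stems)
# A performer could then map, say, each stem's RMS to the opacity of its
# visual layer and its centroid to that layer's hue.
```

Analyzing stems separately, rather than the final mix, is what gives the fine-grained mapping control the abstract describes: features of one instrument are not masked by the others.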
dc.language.iso: en
dc.publisher: Queen Mary University of London
dc.title: Bountiful Data: Leveraging Multitrack Audio and Content-Based for Audiovisual Performance.
dc.type: Thesis
rioxxterms.funder: Default funder
rioxxterms.identifier.project: Default project



This item appears in the following Collection(s)

  • Theses [4235]
    Theses Awarded by Queen Mary University of London