    Bountiful Data: Leveraging Multitrack Audio and Content-Based for Audiovisual Performance.

    View/Open
    PhD Thesis (36.57 MB)
    Publisher
    Queen Mary University of London
    Metadata
    Show full item record
    Abstract
    Artists and researchers have long theorized and evaluated connections between sound and image in the context of musical performance. In our investigation, we introduce music information retrieval (MIR) techniques to the practice of live sound visualization and utilize computer-generated graphics to create aesthetic representations of music signals in real time. This thesis presents research that assesses design requirements for live audiovisual practice and evaluates different sound and image interaction systems. We propose a visualization method based on automated music analysis and multitrack audio to provide fine-grained controls for audio-to-visual mapping and to support creative practice. We adopted a user-centered design approach informed by a meta-analysis of user studies exploring contemporary methods of live visual performance. We then conducted online surveys collecting general and specialist knowledge about audiovisual practices, multitrack audio, and audio feature extraction from over 50 practitioners. We performed research through design (RtD) and developed four audiovisual artifacts to test different audiovisual paradigms according to user interaction with audio data, mapping strategies, expression, and affordances. This helped us identify features and limitations of audiovisual models for live performance. Our final prototype (FEATUR.UX.AV) enables users to compose live visuals driven by audio features extracted from multiple instrumental audio stems. We conducted an experiment with 22 audiovisual performers to assess visualization methods under different audio input (multitrack, final mix) and audio feature (raw audio, content-based audio features) conditions. We used Human-Computer Interaction (HCI) frameworks to assess usability, hedonic experience, preference, and value as a creativity support tool. In addition to established frameworks, we used qualitative methods to analyze reflective feedback from open-ended questions related to aspects of user experience. This evaluation helped us gain insight into the nuances of user experience and highlight advantages and drawbacks of multitrack audio and content-based audio analysis for live audiovisual practice.
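    The pipeline the abstract describes, extracting content-based audio features per instrumental stem and mapping them to visual parameters, can be sketched roughly as follows. This is a minimal illustration, not the thesis's FEATUR.UX.AV implementation: the feature choices (RMS energy, spectral centroid), the visual mapping, and the synthetic stems are all assumptions made for the example.

    ```python
    # Hypothetical sketch of per-stem content-based feature extraction
    # for audio-to-visual mapping. Real systems would read multitrack
    # audio files; here we synthesize two stems for illustration.
    import numpy as np

    SR = 22050  # sample rate in Hz (assumption)

    def frame_features(stem: np.ndarray, frame: int = 1024) -> dict:
        """Compute simple content-based features per frame for one stem."""
        n = len(stem) // frame
        frames = stem[: n * frame].reshape(n, frame)
        # RMS energy: a per-frame loudness proxy.
        rms = np.sqrt((frames ** 2).mean(axis=1))
        # Spectral centroid: the "center of mass" of the spectrum,
        # a rough correlate of perceived brightness.
        spec = np.abs(np.fft.rfft(frames, axis=1))
        freqs = np.fft.rfftfreq(frame, d=1.0 / SR)
        centroid = (spec * freqs).sum(axis=1) / np.maximum(spec.sum(axis=1), 1e-12)
        return {"rms": rms, "centroid": centroid}

    def map_to_visuals(features: dict) -> dict:
        """Map audio features to visual parameters (hypothetical mapping)."""
        rms, cen = features["rms"], features["centroid"]
        return {
            "size": rms / max(rms.max(), 1e-12),       # louder -> larger
            "hue": np.clip(cen / (SR / 2), 0.0, 1.0),  # brighter -> hue shift
        }

    # Two synthetic "stems": a low sine tone and a noisy hi-hat-like signal.
    t = np.arange(SR) / SR
    stems = {
        "bass": 0.5 * np.sin(2 * np.pi * 110 * t),
        "hats": 0.2 * np.random.default_rng(0).standard_normal(SR),
    }
    visuals = {name: map_to_visuals(frame_features(s)) for name, s in stems.items()}
    ```

    Because each stem is analyzed independently, distinct instruments can drive distinct visual elements, which is the key affordance multitrack input offers over a final mix.
    
    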
    Authors
    Olowe, Ireti
    URI
    https://qmro.qmul.ac.uk/xmlui/handle/123456789/71073
    Collections
    • Theses [3702]