Recent Submissions

  • An extensible cluster-graph taxonomy for open set sound scene analysis 

    Bear, H; Benetos, E (Workshop on Detection and Classification of Acoustic Scenes and Events, 2018)
    We present a new extensible and divisible taxonomy for open set sound scene analysis. This new model allows complex scene analysis with tangible descriptors and perception labels. Its novel structure is a cluster graph ...
  • Towards HMM-based glissando detection for recordings of Chinese bamboo flute 

    Wang, C; Benetos, E; Meng, X; Chew, E; Late-Breaking Demos Session (International Society for Music Information Retrieval Conference, 2018)
    Playing techniques such as ornamentations and articulation effects constitute important aspects of music performance. However, their computational analysis is still under-explored due to a lack of data and established ...
  • Deep Learning for Audio Event Detection and Tagging on Low-Resource Datasets 

    Morfi, V; Stowell, D (MDPI, 2018-08-18)
    In training a deep learning system to perform audio transcription, two practical problems may arise. Firstly, most datasets are weakly labelled, having only a list of events present in each recording without any temporal ...
  • Acoustic event detection for multiple overlapping similar sources 

    Stowell, D; Clayton, D (IEEE, 2015-10)
    Many current paradigms for acoustic event detection (AED) are not adapted to the organic variability of natural sounds, and/or they assume a limit on the number of simultaneous sources: often only one source, or one source ...
  • Towards Complete Polyphonic Music Transcription: Integrating Multi-Pitch Detection and Rhythm Quantization 

    Nakamura, E; Benetos, E; Yoshii, K; Dixon, S; International Conference on Acoustics, Speech and Signal Processing (IEEE, 2018-04)
    Most work on automatic transcription produces "piano roll" data with no musical interpretation of the rhythm or pitches. We present a polyphonic transcription method that converts a music audio signal into a human-readable ...