    • Large-Scale Pretrained Model for Self-Supervised Music Audio Representation Learning 

      Li, Y; Yuan, R; Zhang, G; Ma, Y; Lin, C; Chen, X; Ragni, A; Yin, H; Hu, Z; He, H (2022-12-20)
      Self-supervised learning techniques are under-explored for music audio due to the challenge of designing an appropriate training paradigm. We hence propose MAP-MERT, a large-scale music audio pre-trained model for ...
    • MARBLE: Music Audio Representation Benchmark for Universal Evaluation 

      Yuan, R; Ma, Y; Li, Y; Zhang, G; Chen, X; Yin, H; Zhuo, L; Liu, Y; Huang, J; Tian, Z (37th Conference on Neural Information Processing Systems (NeurIPS), 2023)
      In the era of extensive intersection between art and Artificial Intelligence (AI), such as image generation and fiction co-creation, AI for music remains relatively nascent, particularly in music understanding. This is ...
    • MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training 

      Li, Y; Yuan, R; Zhang, G; Ma, Y; Chen, X; Yin, H; Xiao, C; Lin, C; Ragni, A; Benetos, E (2024-05-07)
      Self-supervised learning (SSL) has recently emerged as a promising paradigm for training generalisable models on large-scale data in the fields of vision, text, and speech. Although SSL has been proven effective in speech ...
    • On the effectiveness of speech self-supervised learning for music 

      Ma, Y; Yuan, R; Li, Y; Zhang, G; Chen, X; Yin, H; Lin, C; Benetos, E; Ragni, A; Gyenge, N (International Society for Music Information Retrieval Conference (ISMIR), 2023-11-05)
      Self-supervised learning (SSL) has shown promising results in various speech and natural language processing applications. However, its efficacy in music information retrieval (MIR) remains largely unexplored. While ...