Kymatio: Deep Learning meets Wavelet Theory for Music Signal Processing
Abstract
We present a tutorial on MIR with the open-source Kymatio toolkit (Andreux et al., 2020) for the analysis and synthesis of music signals and timbre with differentiable computing. Kymatio is a Python package for applications at the intersection of deep learning and wavelet scattering. Its latest release (v0.4) provides an implementation of the joint time–frequency scattering transform (JTFS), an idealisation of a neurophysiological model well known in musical timbre perception research: the spectrotemporal receptive field (STRF) (Patil et al., 2012).

In MIR research, scattering transforms have demonstrated effectiveness in musical instrument classification (Vahidi et al., 2022), neural audio synthesis (Andreux et al., 2018), playing technique recognition and similarity (Lostanlen et al., 2021), acoustic modelling (Lostanlen et al., 2020), and synthesizer parameter estimation and objective audio similarity (Vahidi et al., 2023; Lostanlen et al., 2023).

The Kymatio ecosystem will be introduced with examples in MIR:

- Introduction to wavelet and scattering transforms (including the constant-Q transform, scattering transforms, joint time–frequency scattering transforms, and visualizations)
- MIR with scattering: music classification
- A perceptual distance objective for gradient descent
- Generative evaluation of audio representations (GEAR) (Lostanlen et al., 2023)

A comprehensive overview of Kymatio's frontend user interface will be given, with examples of extending the core routines and constructing filterbanks.

We ask our participants to have some prior knowledge of:

- Python and NumPy programming (familiarity with PyTorch is a bonus, but not essential)
- Spectrogram visualization
- Computer-generated sounds

No prior knowledge of wavelet or scattering transforms is expected.
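To give a flavour of what a scattering transform computes, the sketch below implements a first-order time scattering in plain NumPy: a crude constant-Q bank of Gaussian bandpass filters, followed by complex modulus and lowpass averaging. This is an illustrative toy, not Kymatio's implementation; in Kymatio itself the analogous computation is exposed through frontend classes such as `kymatio.numpy.Scattering1D`, and all parameter choices here (Q, number of octaves, averaging scale `T`) are illustrative assumptions.

```python
import numpy as np

def morlet_like_filterbank(n, Q=8, n_octaves=6):
    """Bank of Gaussian bandpass filters in the frequency domain,
    Q filters per octave: a crude constant-Q design (toy example,
    not Kymatio's Morlet filterbank)."""
    # geometrically spaced centre frequencies, starting below Nyquist (0.5)
    xi = 0.4 * 2.0 ** (-np.arange(Q * n_octaves) / Q)
    omega = np.fft.fftfreq(n)  # normalized frequencies in cycles/sample
    sigma = xi / Q             # bandwidth proportional to xi -> constant Q
    filters = np.exp(-((omega[None, :] - xi[:, None]) ** 2)
                     / (2.0 * sigma[:, None] ** 2))
    return xi, filters

def scattering_order1(x, Q=8, n_octaves=6, T=64):
    """First-order scattering coefficients: |x * psi_lambda| averaged
    over windows of length T (time-shift invariance up to scale T)."""
    n = len(x)
    xi, filters = morlet_like_filterbank(n, Q=Q, n_octaves=n_octaves)
    X = np.fft.fft(x)
    # wavelet modulus: magnitude of each bandpass-filtered analytic signal
    U1 = np.abs(np.fft.ifft(filters * X[None, :], axis=1))
    # lowpass average and subsample by T
    S1 = U1.reshape(len(xi), n // T, T).mean(axis=2)
    return xi, S1

# Usage: a pure tone concentrates its energy in the filter whose
# centre frequency is nearest the tone's frequency.
n = 8192
x = np.cos(2 * np.pi * 0.1 * np.arange(n))
xi, S1 = scattering_order1(x)          # S1 has shape (48, 128)
peak = xi[np.argmax(S1.sum(axis=1))]   # close to 0.1 cycles/sample
```

Kymatio's real frontends wrap this pipeline (with proper Morlet filters, second-order paths, and GPU/autodiff backends) so that the transform can sit inside a differentiable training loop.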