    • Robustness of Adversarial Attacks in Sound Event Classification 

      Subramanian, V; Benetos, E; Sandler, M; 4th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE 2019) (2019-10-25)
      An adversarial attack is a method for generating perturbations of a machine learning model's input that cause the model to produce incorrect output. The perturbed inputs are known as adversarial examples. In this paper, ...
    • A Study on the Transferability of Adversarial Attacks in Sound Event Classification 

      Subramanian, V; Pankajakshan, A; Benetos, E; Xu, N; McDonald, S; Sandler, M; IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020) (IEEE, 2020-05-04)
      An adversarial attack is an algorithm that intelligently perturbs the input of a machine learning model in order to change the model's output. An important property of adversarial attacks is transferability. ...
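
The two concepts the abstracts above introduce — crafting an adversarial perturbation, and that perturbation transferring between models — can be sketched on toy linear classifiers. This is a minimal, hypothetical illustration in the spirit of the fast-gradient-sign method, not the attacks or sound event classifiers studied in the papers; all weights and values below are invented for the example.

```python
import numpy as np

def predict(w, b, x):
    """Linear binary classifier: class 1 if w.x + b > 0, else class 0."""
    return int(np.dot(w, x) + b > 0)

def fgsm_perturb(w, x, eps):
    """FGSM-style step for a linear model: the gradient of the class-1
    score with respect to x is simply w, so stepping against sign(w)
    pushes the score toward the opposite class."""
    return x - eps * np.sign(w)

# Hypothetical 'source' and 'target' models with similar decision boundaries.
w_src, b_src = np.array([1.0, -1.0]), 0.0
w_tgt, b_tgt = np.array([0.9, -1.1]), 0.0

x = np.array([0.3, 0.1])                  # clean input: class 1 under both models
x_adv = fgsm_perturb(w_src, x, eps=0.3)   # adversarial example crafted on the source

# The perturbation flips the source model's prediction, and because the
# target model's boundary is similar, the same example fools it too —
# this is transferability.
```

Because both toy models assign similar importance to each input dimension, a perturbation aligned against one model's gradient also crosses the other's decision boundary; the papers study how far this effect carries between real sound event classifiers.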