Robustness of Adversarial Attacks in Sound Event Classification
(2019-10-25)
An adversarial attack is a method for generating perturbations to the input of a machine learning model so that the model's output becomes incorrect. The perturbed inputs are known as adversarial examples. In this paper, ...
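The abstract's definition can be illustrated with the fast gradient sign method (FGSM), a standard adversarial attack; note this is a generic sketch on a toy logistic model, not the specific attack studied in the paper. All names and values below are illustrative assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, w, b):
    """Binary prediction of a logistic model sigmoid(w.x + b)."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5

def fgsm_perturb(x, w, b, y, eps):
    """FGSM-style perturbation: step each input feature by eps in the
    sign of the loss gradient, raising the loss for true label y in {0, 1}."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    # For binary cross-entropy loss, d(loss)/dx_i = (p - y) * w_i
    return [xi + eps * (1.0 if (p - y) * wi > 0 else -1.0)
            for wi, xi in zip(w, x)]

# Toy example: a correctly classified input is pushed toward misclassification
w, b = [2.0, -1.0], 0.0
x = [1.0, 0.5]                              # true label: 1
x_adv = fgsm_perturb(x, w, b, y=1, eps=0.6)
print(predict(x, w, b))                     # True  (correct on clean input)
print(predict(x_adv, w, b))                 # False (fooled by the perturbation)
```

The perturbation is small per feature (bounded by `eps`), yet it flips the prediction, which is the essential property of an adversarial example.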