Show simple item record

dc.contributor.author          MISHRA, S                                              en_US
dc.contributor.author          STOLLER, D                                             en_US
dc.contributor.author          BENETOS, E                                             en_US
dc.contributor.author          STURM, B                                               en_US
dc.contributor.author          DIXON, S                                               en_US
dc.contributor.author          SafeML ICLR 2019 Workshop                              en_US
dc.date.accessioned            2019-04-30T10:54:19Z
dc.date.available              2019-03-22                                             en_US
dc.date.issued                 2019-05-06                                             en_US
dc.identifier.uri              https://qmro.qmul.ac.uk/xmlui/handle/123456789/57216
dc.description.abstract        One way to interpret trained deep neural networks (DNNs) is by inspecting characteristics that neurons in the model respond to, such as by iteratively optimising the model input (e.g., an image) to maximally activate specific neurons. However, this requires a careful selection of hyper-parameters to generate interpretable examples for each neuron of interest, and current methods rely on a manual, qualitative evaluation of each setting, which is prohibitively slow. We introduce a new metric that uses Fréchet Inception Distance (FID) to encourage similarity between model activations for real and generated data. This provides an efficient way to evaluate a set of generated examples for each setting of hyper-parameters. We also propose a novel GAN-based method for generating explanations that enables an efficient search through the input space and imposes a strong prior favouring realistic outputs. We apply our approach to a classification model trained to predict whether a music audio recording contains singing voice. Our results suggest that this proposed metric successfully selects hyper-parameters leading to interpretable examples, avoiding the need for manual evaluation. Moreover, we see that examples synthesised to maximise or minimise the predicted probability of singing voice presence exhibit vocal or non-vocal characteristics, respectively, suggesting that our approach is able to generate suitable explanations for understanding concepts learned by a neural network.  en_US
dc.format.extent               ? - ? (1)                                              en_US
dc.title                       GAN-based Generation and Automatic Selection of Explanations for Neural Networks  en_US
dc.type                        Conference Proceeding
dc.rights.holder               © The Author(s) 2019
pubs.author-url                https://sites.google.com/site/saumitramishrac4dm/     en_US
pubs.notes                     Not known                                              en_US
pubs.publication-status        Accepted                                               en_US
dcterms.dateAccepted           2019-03-22                                             en_US
rioxxterms.funder              Default funder                                         en_US
rioxxterms.identifier.project  Default project                                        en_US
qmul.funder                    A Machine Learning Framework for Audio Analysis and Retrieval::Royal Academy of Engineering  en_US
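The abstract proposes selecting hyper-parameters by measuring a Fréchet distance between model activations for real and for generated examples. A minimal NumPy sketch of that distance is below; `frechet_distance` is a hypothetical helper written here for illustration, not the authors' code, and it computes the trace of the matrix square root via the eigenvalues of the covariance product.

```python
import numpy as np

def frechet_distance(acts_real, acts_gen):
    """Fréchet distance between Gaussians fitted to two activation sets.

    acts_real, acts_gen: (n_samples, n_features) arrays of layer
    activations for real and generated inputs, respectively.
    """
    mu_r, mu_g = acts_real.mean(axis=0), acts_gen.mean(axis=0)
    sigma_r = np.cov(acts_real, rowvar=False)
    sigma_g = np.cov(acts_gen, rowvar=False)
    # Tr((Σr Σg)^{1/2}) equals the sum of the square roots of the
    # eigenvalues of Σr Σg, since that product is similar to the
    # positive semi-definite matrix Σr^{1/2} Σg Σr^{1/2}.
    eigvals = np.linalg.eigvals(sigma_r @ sigma_g)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(sigma_r) + np.trace(sigma_g) - 2.0 * tr_sqrt)
```

Lower values indicate that the generated examples activate the network similarly to real data, so a hyper-parameter setting would be kept when its generated set yields the smallest distance.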

