Show simple item record

dc.contributor.author: Bear, H (en_US)
dc.contributor.author: Nolasco, I (en_US)
dc.contributor.author: Benetos, E (en_US)
dc.contributor.author: 20th Annual Conference of the International Speech Communication Association (INTERSPEECH 2019) (en_US)
dc.date.accessioned: 2019-07-12T10:00:06Z
dc.date.available: 2019-06-17 (en_US)
dc.date.issued: 2019-09-15 (en_US)
dc.identifier.uri: https://qmro.qmul.ac.uk/xmlui/handle/123456789/58478
dc.description.abstract: Acoustic Scene Classification (ASC) and Sound Event Detection (SED) are two separate tasks in the field of computational sound scene analysis. In this work, we present a new dataset with both sound scene and sound event labels and use this to demonstrate a novel method for jointly classifying sound scenes and recognizing sound events. We show that by taking a joint approach, learning is more efficient and, whilst improvements are still needed for sound event detection, SED results are robust in a dataset where the sample distribution is skewed towards sound scenes. (en_US)
dc.format.extent: 4594-4598 (en_US)
dc.publisher: International Speech Communication Association (ISCA) (en_US)
dc.title: Towards joint sound scene and polyphonic sound event recognition (en_US)
dc.type: Conference Proceeding
dc.rights.holder: © The Author(s) 2019
pubs.notes: Not known (en_US)
pubs.publication-status: Accepted (en_US)
pubs.publisher-url: https://www.interspeech2019.org/ (en_US)
dcterms.dateAccepted: 2019-06-17 (en_US)
rioxxterms.funder: Default funder (en_US)
rioxxterms.identifier.project: Default project (en_US)
qmul.funder: A Machine Learning Framework for Audio Analysis and Retrieval::Royal Academy of Engineering (en_US)
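The abstract's joint approach (classifying one sound scene per clip while detecting multiple, possibly overlapping sound events) can be illustrated with a minimal multi-task sketch: a shared encoder feeding a softmax scene head and an independent-sigmoid event head, trained against the sum of the two losses. This is a generic shared-representation setup under assumed, made-up dimensions, not the authors' actual architecture or dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical sizes for illustration only -- not taken from the paper.
n_feats, n_hidden, n_scenes, n_events = 40, 16, 3, 5

# One shared encoder, plus a separate linear head per task.
W_shared = rng.normal(scale=0.1, size=(n_feats, n_hidden))
W_scene = rng.normal(scale=0.1, size=(n_hidden, n_scenes))
W_event = rng.normal(scale=0.1, size=(n_hidden, n_events))

def forward(x):
    """Shared representation feeds both the scene and event heads."""
    h = np.tanh(x @ W_shared)
    scene_probs = softmax(h @ W_scene)   # exactly one scene per clip
    event_probs = sigmoid(h @ W_event)   # polyphonic: events are independent
    return scene_probs, event_probs

def joint_loss(scene_probs, event_probs, scene_y, event_y):
    """Categorical cross-entropy (scene) plus binary cross-entropy (events)."""
    ce = -np.log(scene_probs[np.arange(len(scene_y)), scene_y] + 1e-9).mean()
    bce = -(event_y * np.log(event_probs + 1e-9)
            + (1.0 - event_y) * np.log(1.0 - event_probs + 1e-9)).mean()
    return ce + bce

# A toy batch of 8 feature vectors with random labels.
x = rng.normal(size=(8, n_feats))
scene_y = rng.integers(0, n_scenes, size=8)
event_y = rng.integers(0, 2, size=(8, n_events)).astype(float)

scene_p, event_p = forward(x)
loss = joint_loss(scene_p, event_p, scene_y, event_y)
```

Minimizing the summed loss updates the shared encoder from both tasks at once, which is one way the "learning is more efficient" claim in the abstract is typically realized in multi-task models.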

