
dc.contributor.author	Pankajakshan, A	en_US
dc.contributor.author	Bear, H	en_US
dc.contributor.author	Benetos, E	en_US
dc.contributor.author	IEEE Workshop on Applications of Signal Processing to Audio and Acoustics	en_US
dc.date.accessioned	2019-08-13T10:52:16Z
dc.date.available	2019-07-15	en_US
dc.date.issued	2019-10-20	en_US
dc.identifier.uri	https://qmro.qmul.ac.uk/xmlui/handle/123456789/59068
dc.description.abstract	Polyphonic Sound Event Detection (SED) in real-world recordings is a challenging task because of the dynamic polyphony level, intensity, and duration of sound events. Current polyphonic SED systems do not model the temporal structure of sound events explicitly; instead, they only predict which sound events are present at each audio frame. Consequently, the event-wise detection performance is much lower than the segment-wise detection performance. In this work, we propose a joint model approach to improve the temporal localization of sound events using a multi-task learning setup. The first task predicts which sound events are present at each time frame; we call this branch the 'Sound Event Detection (SED) model'. The second task predicts whether any sound event is present at each frame; we call this branch the 'Sound Activity Detection (SAD) model'. We verify the proposed joint model by comparing it with a separate implementation of both tasks, aggregated from the individual task predictions. Our experiments on the URBAN-SED dataset show that the proposed joint model can reduce False Positive (FP) and False Negative (FN) errors and improve both the segment-wise and the event-wise metrics.	en_US
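The sketch below illustrates the kind of joint SED/SAD multi-task setup summarised in the abstract, assuming a shared CRNN encoder feeding two output heads trained with an equally weighted binary cross-entropy loss. The layer sizes, class count, mel-band count, and loss weighting are illustrative assumptions, not the configuration reported in the paper.

# Minimal sketch of a joint SED + SAD multi-task model (assumed architecture).
import torch
import torch.nn as nn

class JointSEDSAD(nn.Module):
    def __init__(self, n_mels=40, n_classes=10, hidden=64):
        super().__init__()
        # Shared convolutional front-end over log-mel spectrogram frames
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d((1, 4)),  # pool over frequency only, keep time resolution
        )
        self.gru = nn.GRU(32 * (n_mels // 4), hidden,
                          batch_first=True, bidirectional=True)
        # Task 1: frame-wise multi-label sound event detection (SED)
        self.sed_head = nn.Linear(2 * hidden, n_classes)
        # Task 2: frame-wise binary sound activity detection (SAD)
        self.sad_head = nn.Linear(2 * hidden, 1)

    def forward(self, x):
        # x: (batch, 1, time, n_mels)
        h = self.conv(x)                           # (batch, 32, time, n_mels // 4)
        b, c, t, f = h.shape
        h = h.permute(0, 2, 1, 3).reshape(b, t, c * f)
        h, _ = self.gru(h)                         # (batch, time, 2 * hidden)
        return self.sed_head(h), self.sad_head(h)  # per-frame logits for both tasks

def joint_loss(sed_logits, sad_logits, sed_targets, sad_targets, alpha=0.5):
    # Joint objective: binary cross-entropy on both branches,
    # with an assumed equal weighting (alpha) between the two tasks.
    bce = nn.BCEWithLogitsLoss()
    return alpha * bce(sed_logits, sed_targets) + (1 - alpha) * bce(sad_logits, sad_targets)

At inference time the SAD branch could, for instance, gate the frame-wise SED predictions; how the two branches are combined in the published system is not specified in this record.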
dc.format.extent	318 - 322	en_US
dc.publisher	IEEE	en_US
dc.title	Polyphonic sound event and sound activity detection: a multi-task approach	en_US
dc.type	Conference Proceeding
dc.rights.holder	© 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
pubs.author-url	http://www.eecs.qmul.ac.uk/profiles/pankajakshanarjun.html	en_US
pubs.notes	Not known	en_US
pubs.publication-status	Accepted	en_US
dcterms.dateAccepted	2019-07-15	en_US
rioxxterms.funder	Default funder	en_US
rioxxterms.identifier.project	Default project	en_US
qmul.funder	A Machine Learning Framework for Audio Analysis and Retrieval::Royal Academy of Engineering	en_US

