
dc.contributor.author: Kollias, D
dc.contributor.author: Sharmanska, V
dc.contributor.author: Zafeiriou, S
dc.relation.conference: 38th Annual AAAI Conference on Artificial Intelligence
dc.date.accessioned: 2024-02-23T10:48:41Z
dc.date.available: 2024-01-09
dc.date.issued: 2024-03-24
dc.identifier.citation: Kollias, D., Sharmanska, V., & Zafeiriou, S. (2024). Distribution Matching for Multi-Task Learning of Classification Tasks: A Large-Scale Study on Faces & Beyond. Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 2813-2821. https://doi.org/10.1609/aaai.v38i3.28061
dc.identifier.uri: https://qmro.qmul.ac.uk/xmlui/handle/123456789/94857
dc.description.abstract: Multi-Task Learning (MTL) is a framework in which multiple related tasks are learned jointly and benefit from a shared representation space or parameter transfer. To provide sufficient learning support, modern MTL uses annotated data with full, or sufficiently large, overlap across tasks, i.e., each input sample is annotated for all or most of the tasks. However, collecting such annotations is prohibitive in many real applications, and this setup cannot benefit from datasets available for individual tasks. In this work, we challenge this setup and show that MTL can be successful with classification tasks whose annotations overlap little or not at all, or when there is a large discrepancy in the amount of labeled data per task. We explore task-relatedness for co-annotation and co-training, and propose a novel approach in which knowledge exchange between the tasks is enabled via distribution matching. To demonstrate the general applicability of our method, we conducted diverse case studies in the domains of affective computing, face recognition, species recognition, and shopping item classification using nine datasets. Our large-scale study of affective tasks for basic expression recognition and facial action unit detection illustrates that our approach is network agnostic and brings large performance improvements compared to the state of the art in both tasks and across all studied databases. In all case studies, we show that co-training via task-relatedness is advantageous and prevents negative transfer (which occurs when the multi-task model's performance is worse than that of at least one single-task model).
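The abstract describes knowledge exchange between tasks via distribution matching only at a high level. Below is a minimal sketch of that general idea, assuming a shared backbone with one classification head per task and a soft task-relatedness prior: a sample labeled only for task A also supervises task B by matching task B's predicted distribution to a prior conditioned on the task-A label. `TwoHeadModel`, `distribution_matching_loss`, and the prior `R` are hypothetical stand-ins for illustration, not the authors' published implementation.

```python
# Minimal sketch (hypothetical, not the paper's implementation) of co-training
# two classification tasks via distribution matching: samples annotated only
# for task A also supervise task B through a soft task-relatedness prior.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadModel(nn.Module):
    def __init__(self, in_dim=128, hidden=64, n_a=7, n_b=12):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head_a = nn.Linear(hidden, n_a)  # e.g., basic expressions
        self.head_b = nn.Linear(hidden, n_b)  # e.g., facial action units

    def forward(self, x):
        z = self.backbone(x)
        return self.head_a(z), self.head_b(z)

def distribution_matching_loss(logits_b, labels_a, relatedness):
    """KL-match task-B predictions to a soft prior indexed by the task-A label.

    relatedness: (n_a, n_b) tensor; row a is a distribution over task-B classes.
    """
    target = relatedness[labels_a]                # (batch, n_b) soft targets
    log_pred = F.log_softmax(logits_b, dim=-1)
    return F.kl_div(log_pred, target, reduction="batchmean")

model = TwoHeadModel()
x = torch.randn(4, 128)                           # batch labeled only for task A
y_a = torch.randint(0, 7, (4,))
R = torch.softmax(torch.randn(7, 12), dim=-1)     # hypothetical relatedness prior

logits_a, logits_b = model(x)
loss = F.cross_entropy(logits_a, y_a) \
     + distribution_matching_loss(logits_b, y_a, R)
loss.backward()                                   # both heads receive gradient
```

Because the matching term is just an extra loss on an existing head, this style of co-training is architecture agnostic, consistent with the abstract's claim that the approach is network agnostic.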
dc.publisher: Association for the Advancement of Artificial Intelligence (AAAI)
dc.relation.ispartof: Proceedings of the AAAI Conference on Artificial Intelligence
dc.rights: © 2024, Association for the Advancement of Artificial Intelligence
dc.title: Distribution Matching for Multi-Task Learning of Classification Tasks: A Large-Scale Study on Faces & Beyond
dc.type: Conference Proceeding
dc.identifier.doi: https://doi.org/10.1609/aaai.v38i3.28061
pubs.notes: Not known
pubs.publication-status: Accepted
dcterms.dateAccepted: 2024-01-09
rioxxterms.funder: Default funder
rioxxterms.identifier.project: Default project

