Show simple item record

dc.contributor.author: Dehban, A (en_US)
dc.contributor.author: Zhang, S (en_US)
dc.contributor.author: Cauli, N (en_US)
dc.contributor.author: Jamone, L (en_US)
dc.contributor.author: Santos-Victor, J (en_US)
dc.date.accessioned: 2022-03-11T13:31:02Z
dc.date.issued: 2022-02-17 (en_US)
dc.identifier.issn: 2379-8920 (en_US)
dc.identifier.uri: https://qmro.qmul.ac.uk/xmlui/handle/123456789/77278
dc.description.abstract: In order to effectively handle multiple tasks that are not pre-defined, a robotic agent needs to automatically map its high-dimensional sensory inputs into useful features. As a solution, feature learning has empirically shown substantial improvements over feature engineering in obtaining representations that generalize to different tasks, but it requires a large amount of data and computational capacity. These challenges are particularly relevant in robotics because of the low signal-to-noise ratios inherent in robotic data and the cost typically associated with collecting this type of input. In this paper, we propose a deep probabilistic method based on Convolutional Variational Auto-Encoders (CVAEs) to learn visual features suitable for interaction and recognition tasks. We run our experiments on a self-supervised robotic sensorimotor dataset. Our data was acquired with the iCub humanoid and is based on a standard object collection, making it readily extensible. We evaluated the learned features in terms of usability for 1) object recognition, 2) capturing the statistics of the effects, and 3) planning. In addition, where applicable, we compared the performance of the proposed architecture with other state-of-the-art models. These experiments demonstrate that our model captures the functional statistics of action and perception (i.e., images) better than existing baselines, without requiring millions of samples or any hand-engineered features. (en_US)
dc.publisher: Institute of Electrical and Electronics Engineers (en_US)
dc.relation.ispartof: IEEE Transactions on Cognitive and Developmental Systems (en_US)
dc.title: Learning Deep Features for Robotic Inference from Physical Interactions (en_US)
dc.type: Article
dc.rights.holder: © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.identifier.doi: 10.1109/TCDS.2022.3152383 (en_US)
pubs.notes: Not known (en_US)
pubs.publication-status: Published (en_US)
rioxxterms.funder: Default funder (en_US)
rioxxterms.identifier.project: Default project (en_US)
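To make the method named in the abstract concrete, the following is a minimal sketch of a convolutional variational auto-encoder of the kind the paper builds on. It is written in PyTorch and all architectural details (64x64 RGB input, channel counts, a 32-dimensional latent space, the ConvVAE/vae_loss names) are illustrative assumptions, not the authors' published architecture; consult the paper at the DOI above for the actual model.

# Minimal CVAE sketch (illustrative only; not the authors' architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvVAE(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        # Encoder: 3x64x64 image -> flattened convolutional feature map.
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),    # -> 32x32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # -> 64x16x16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # -> 128x8x8
            nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)
        # Decoder: latent vector -> reconstructed image.
        self.fc_dec = nn.Linear(latent_dim, 128 * 8 * 8)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # -> 64x16x16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # -> 32x32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # -> 3x64x64
            nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterisation trick: z = mu + sigma * eps, eps ~ N(0, I).
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)
        recon = self.dec(self.fc_dec(z).view(-1, 128, 8, 8))
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Standard VAE objective: reconstruction error plus KL divergence
    # from the approximate posterior to the unit-Gaussian prior.
    rec = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

if __name__ == "__main__":
    # Smoke test on random data; in the paper's setting, x would be camera
    # images gathered during the robot's self-supervised interactions.
    model = ConvVAE()
    x = torch.rand(4, 3, 64, 64)
    recon, mu, logvar = model(x)
    print(vae_loss(recon, x, mu, logvar).item())

After training, the encoder mean mu serves as the learned feature vector; the abstract's evaluations (object recognition, effect statistics, planning) would operate on such latent codes rather than on raw pixels.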

