Show simple item record

dc.contributor.author: Chen, R
dc.contributor.author: Zheng, B
dc.contributor.author: Zhang, H
dc.contributor.author: Chen, Q
dc.contributor.author: Yan, C
dc.contributor.author: Slabaugh, G
dc.contributor.author: Yuan, S
dc.date.accessioned: 2024-02-09T16:12:07Z
dc.date.available: 2024-02-09T16:12:07Z
dc.date.issued: 2023-06-27
dc.identifier.isbn: 9781577358800
dc.identifier.uri: https://qmro.qmul.ac.uk/xmlui/handle/123456789/94548
dc.description.abstract: Reconstructing a High Dynamic Range (HDR) image from several Low Dynamic Range (LDR) images with different exposures is a challenging task, especially in the presence of camera and object motion. Though existing models using convolutional neural networks (CNNs) have made great progress, challenges still exist, e.g., ghosting artifacts. Transformers, originating from the field of natural language processing, have shown success in computer vision tasks, due to their ability to address a large receptive field even within a single layer. In this paper, we propose a transformer model for HDR imaging. Our pipeline includes three steps: alignment, fusion, and reconstruction. The key component is the HDR transformer module. Through experiments and ablation studies, we demonstrate that our model outperforms the state-of-the-art by large margins on several popular public datasets. [en_US]
dc.format.extent: 340 - 349
dc.publisher: Association for the Advancement of Artificial Intelligence [en_US]
dc.title: Improving Dynamic HDR Imaging with Fusion Transformer [en_US]
dc.type: Conference Proceeding [en_US]
dc.rights.holder: © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
pubs.notes: Not known [en_US]
pubs.publication-status: Published [en_US]
pubs.volume: 37 [en_US]
rioxxterms.funder: Default funder [en_US]
rioxxterms.identifier.project: Default project [en_US]

