Show simple item record

dc.contributor.author: Lin, J
dc.contributor.author: Huang, X
dc.contributor.author: Zhou, H
dc.contributor.author: Wang, Y
dc.contributor.author: Zhang, Q
dc.date.accessioned: 2023-09-14T09:48:47Z
dc.date.available: 2023-08-07
dc.date.issued: 2023-10
dc.identifier.uri: https://qmro.qmul.ac.uk/xmlui/handle/123456789/90706
dc.description.abstract: Automated retinal blood vessel segmentation in fundus images provides important evidence to ophthalmologists in coping with prevalent ocular diseases in an efficient and non-invasive way. However, segmenting blood vessels in fundus images is a challenging task, due to the high variety in scale and appearance of blood vessels and the high similarity in visual features between lesions and the retinal vasculature. Inspired by the way the visual cortex adaptively responds to the type of stimulus, we propose a Stimulus-Guided Adaptive Transformer Network (SGAT-Net) for accurate retinal blood vessel segmentation. It entails a Stimulus-Guided Adaptive Module (SGA-Module) that can extract local-global compound features based on an inductive bias and a self-attention mechanism. Alongside a lightweight residual encoder (ResEncoder) structure capturing the relevant details of appearance, a Stimulus-Guided Adaptive Pooling Transformer (SGAP-Former) is introduced to reweight the maximum and average pooling, enriching the contextual embedding representation while suppressing redundant information. Moreover, a Stimulus-Guided Adaptive Feature Fusion (SGAFF) module is designed to adaptively emphasize local details and global context and fuse them in the latent space, adjusting the receptive field (RF) to the task. The evaluation is conducted on the largest fundus image dataset (FIVES) and three popular retinal image datasets (DRIVE, STARE, CHASEDB1). Experimental results show that the proposed method achieves competitive performance over other existing methods, with a clear advantage in avoiding errors that commonly occur in areas with highly similar visual features. The source code is publicly available at: https://github.com/Gins-07/SGAT
dc.format.extent: 102929 - ?
dc.language: eng
dc.relation.ispartof: Med Image Anal
dc.rights: Attribution 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/us/
dc.subject: Receptive field
dc.subject: Retinal blood vessel segmentation
dc.subject: Stimulus-guided adaptive feature fusion
dc.subject: Stimulus-guided adaptive pooling transformer
dc.subject: Visual cortex
dc.subject: Humans
dc.subject: Retinal Vessels
dc.subject: Face
dc.subject: Fundus Oculi
dc.title: Stimulus-guided adaptive transformer network for retinal blood vessel segmentation in fundus images.
dc.type: Article
dc.identifier.doi: 10.1016/j.media.2023.102929
pubs.author-url: https://www.ncbi.nlm.nih.gov/pubmed/37598606
pubs.notes: Not known
pubs.publication-status: Published
pubs.volume: 89
dcterms.dateAccepted: 2023-08-07


Files in this item


There are no files associated with this item.



Attribution 3.0 United States
Except where otherwise noted, this item's license is described as Attribution 3.0 United States