Depth-guided deep filtering network for efficient single image bokeh rendering
DOI: 10.1007/s00521-023-08852-y
Journal: Neural Computing & Applications
ISSN: 0941-0643
Abstract
The bokeh effect is commonly used to highlight the main subject of an image. Limited by their small sensors, smartphone cameras are less sensitive to depth information and cannot directly produce a bokeh effect as pleasing as that of digital single-lens reflex cameras. To address this problem, a depth-guided deep filtering network, called DDFN, is proposed in this study. Specifically, a focused region detection block is designed to detect salient areas, and a depth estimation block is introduced to estimate depth maps from all-in-focus images. Further, combining the depth maps and focused features, an adaptive rendering block is proposed to synthesize the bokeh effect with adaptive cross 1-D filters. Both quantitative and qualitative evaluations on public datasets demonstrate that the proposed model performs favorably against state-of-the-art methods in terms of rendering quality while having lower computational cost, e.g., 24.07 dB PSNR on the EBB! dataset and an inference time of 0.45 s for a 512×768 image on a Snapdragon 865 mobile processor.