An Attention Mechanism and Contextual Information Based Low-Light Image Enhancement Method (融合注意力机制和上下文信息的微光图像增强)
Abstract
Objective: Low-light images suffer from degradations such as low contrast, noise artifacts, and color distortion, which impair perceived visual quality and reduce the accuracy of downstream tasks such as recognition, classification, and detection. To address these problems, a low-light image enhancement method that fuses an attention mechanism with contextual information is proposed.

Method: To improve accuracy, an end-to-end low-light enhancement network is built on a U-shaped backbone, consisting mainly of attention-guided encoder/decoder modules, a cross-scale context module, and a fusion module. A mixed attention block (combining spatial and channel attention) guides the backbone: the spatial attention module computes weights over spatial positions to learn the noise characteristics of different regions, while the channel attention module computes channel weights from per-channel color information to strengthen the network's color reconstruction. In addition, the cross-scale context module aggregates deep and shallow features across network stages, and the fusion mechanism improves brightness and color enhancement.

Result: Quantitative and qualitative comparisons with mainstream methods show that the proposed method significantly raises the brightness of low-light images while preserving color consistency; noise in the darkest regions of the original image is markedly removed, and the reconstructed texture details are sharp. On objective metrics, the method improves over the best competing values by 0.74 dB in peak signal-to-noise ratio (PSNR), 0.153 in structural similarity (SSIM), and 0.172 in learned perceptual image patch similarity (LPIPS).

Conclusion: The method effectively addresses under-exposure, noise interference, and color inconsistency in low-light images and has practical application value.
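As an illustration only (not code from the paper), the per-channel weighting that the channel attention module is described as performing can be sketched in a squeeze-and-excitation style: globally pool each channel, pass the pooled vector through a small two-layer MLP, and rescale the channels by the resulting sigmoid weights. The weight shapes and reduction factor here are assumptions.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention (illustrative sketch).

    feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) are the
    weights of a small bottleneck MLP. Returns the feature map with each
    channel rescaled by a learned weight in (0, 1).
    """
    squeeze = feat.mean(axis=(1, 2))                 # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)           # ReLU bottleneck
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid -> (C,) in (0, 1)
    return feat * weights[:, None, None]             # reweight each channel

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8)) * 0.1   # reduction factor r = 4 (assumed)
w2 = rng.standard_normal((8, 2)) * 0.1
out = channel_attention(feat, w1, w2)
```

Because the weights lie in (0, 1), each output channel is a damped copy of its input channel; training would push the weights toward channels carrying useful color information.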
Abstract (English)
Objective: Capturing low-light images is common in night or dark scenarios. In such scenarios, few photons reach the sensor, resulting in poor image quality: low visibility, low contrast, heavy noise, and color distortion. Low-light images constrain many subsequent image processing tasks, including image classification, target recognition, and intelligent monitoring. Current low-light enhancement technology faces two main challenges: 1) spatially varying image brightness and 2) non-uniform noise. The spatial feature distribution of real low-light images is complex, and the number of captured photons varies greatly across spatial positions, so illumination varies strongly over the image. Existing deep learning methods can effectively improve illumination on artificially generated datasets, but overall visibility and the enhancement of under-exposed areas still need improvement. For instance, denoising before enhancement discards some image details, high-noise pixel information is difficult to reconstruct, and denoising after enhancement easily blurs the image. The proposed attention mechanism and contextual information based method fully enhances the image while suppressing potential noise and maintaining color consistency.

Method: An end-to-end low-light enhancement network is constructed with U-Net as the basic framework, mainly comprising channel attention modules, encoders, decoders, a cross-scale context module, and feature fusion modules. The backbone is guided by mixed attention blocks. The input is the low-light image to be enhanced; the output is an enhanced, noise-free color image of the same size. After the network extracts shallow features from the input, a channel attention module first learns a weight for each channel, assigning larger weights to channels carrying useful color information so that more effective color features are extracted. The channel-weighted features and the original low-light image are then fed to the encoder, where mixed attention blocks extract semantic features. Each mixed attention block attends over both spatial and channel information; locating noise in the image and modeling the color features of different channels helps restore brightness and color while suppressing noise. In the decoder, de-convolution modules restore the semantic features extracted by the mixed attention blocks to high-resolution images. In addition, the cross-scale context module adds cross-scale skip connections on top of the conventional same-scale skip connections between encoder and decoder features.

Result: In qualitative evaluation, the method slightly exceeds the reference image in overall contrast but outperforms all compared methods in color detail and noise suppression. In quantitative evaluation with peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and learned perceptual image patch similarity (LPIPS), the method improves on the best competing values by 0.74 dB, 0.153, and 0.172, respectively.
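The cross-scale skip connection described above can be sketched as follows; this is a hypothetical minimal version (the paper's module likely involves learned convolutions), assuming the deep feature sits one scale below the shallow one and is merged by upsampling and channel concatenation.

```python
import numpy as np

def cross_scale_fuse(shallow, deep):
    """Cross-scale context fusion (hypothetical sketch).

    shallow: (C, H, W) encoder feature; deep: (C, H//2, W//2) feature from a
    deeper stage. Nearest-neighbour upsample the deep feature to the shallow
    resolution and concatenate along channels, as a cross-scale skip
    connection would, yielding a (2C, H, W) aggregated feature.
    """
    up = deep.repeat(2, axis=1).repeat(2, axis=2)    # 2x nearest-neighbour upsample
    return np.concatenate([shallow, up], axis=0)     # stack shallow and deep context

rng = np.random.default_rng(1)
shallow = rng.standard_normal((3, 4, 4))
deep = rng.standard_normal((3, 2, 2))
fused = cross_scale_fuse(shallow, deep)
```

A subsequent fusion module (e.g. a 1×1 convolution) would then reduce the concatenated channels back to C; that step is omitted here.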
Source
中国图象图形学报 (Journal of Image and Graphics), 2022, 27(5): 1565-1576 [CSCD core collection]

DOI
10.11834/jig.210583

Keywords
image processing; low-light image enhancement; deep learning; attention mechanism; contextual information

Affiliations
1. Faculty of Printing, Packaging Engineering and Digital Media Technology, Xi'an University of Technology, Xi'an 710048, China
2. Key Laboratory of Spectral Imaging Technology, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, China

Language
Chinese

Document type
Research article

ISSN
1006-8961

Subject
Automation and computer technology

Funding
National Natural Science Foundation of China; Shaanxi Provincial Key Research and Development Program; Key Laboratory of Spectral Imaging Technology, Chinese Academy of Sciences

Accession number
CSCD:7192314