Mixed Degraded Image Restoration Algorithm Based on Adaptive Fusion of Hierarchical Features
Abstract: Images degraded by a mixture of several degradation types are corrupted far more severely than images with a single degradation type, and it is difficult to build a precise explicit model to restore them; end-to-end neural network algorithms are therefore the key to restoring such images. The existing operation-wise attention network (OWAN) improves performance to some extent, but its network is overly complex, it runs slowly, its restored images lack high-frequency detail, and the overall quality still leaves room for improvement. To address these problems, an adaptive restoration algorithm based on hierarchical feature fusion is proposed. The algorithm directly fuses the features of branches with different receptive fields to strengthen the structure of the restored image; it uses an attention mechanism to dynamically fuse features from different hierarchical levels, which makes the model more adaptive and reduces model redundancy; in addition, it combines the L1 loss with a perceptual loss to improve the perceptual quality of the restored image. Experimental results on the DIV2K and BSD500 datasets show that the proposed algorithm outperforms OWAN both in the quantitative measures of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) and in subjective visual quality, which demonstrates its effectiveness.
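The abstract describes two architectural ideas: direct fusion of features from branches with different receptive fields, and an attention mechanism that dynamically re-weights hierarchical features before merging them. The record does not give the paper's implementation, so the PyTorch sketch below is only a minimal illustration of those two ideas under stated assumptions; the module names, the 3x3/5x5/7x7 kernel choices, and the squeeze-and-excitation style gate are illustrative, not the authors' design.

```python
# Minimal sketch, not the paper's implementation: multi-receptive-field branches
# whose outputs are fused directly, plus channel attention over hierarchical features.
import torch
import torch.nn as nn


class MultiReceptiveFieldBlock(nn.Module):
    """Fuses features from parallel branches with different receptive fields (assumed 3x3/5x5/7x7)."""

    def __init__(self, channels: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=k, padding=k // 2)
            for k in (3, 5, 7)
        ])
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)  # direct fusion of branch features
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [self.act(branch(x)) for branch in self.branches]
        return x + self.fuse(torch.cat(feats, dim=1))  # residual connection preserves image structure


class HierarchicalAttentionFusion(nn.Module):
    """Dynamically weights the outputs of several blocks (hierarchical features) with channel attention."""

    def __init__(self, channels: int, num_levels: int):
        super().__init__()
        self.blocks = nn.ModuleList(
            [MultiReceptiveFieldBlock(channels) for _ in range(num_levels)]
        )
        self.attention = nn.Sequential(               # squeeze-and-excitation style gate (assumption)
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(num_levels * channels, num_levels * channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(num_levels * channels, channels, kernel_size=1)

    def forward(self, x):
        levels, feat = [], x
        for block in self.blocks:
            feat = block(feat)
            levels.append(feat)                        # collect features from every level
        stacked = torch.cat(levels, dim=1)
        weighted = stacked * self.attention(stacked)   # adaptive, input-dependent re-weighting
        return self.fuse(weighted)


if __name__ == "__main__":
    net = HierarchicalAttentionFusion(channels=64, num_levels=4)
    print(net(torch.randn(1, 64, 48, 48)).shape)       # torch.Size([1, 64, 48, 48])
```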
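The abstract also states that training combines the L1 loss with a perceptual loss. A common way to build such an objective, in the spirit of Johnson et al. (reference 17), is to add an L1 distance between pretrained VGG features of the restored image and the ground truth; the sketch below assumes a frozen VGG-19 feature extractor and a weight lambda_perc, neither of which is specified in this record.

```python
# Hedged sketch of a combined L1 + perceptual objective; the feature layer and the
# weight lambda_perc are assumptions, not values reported by the paper.
import torch.nn as nn
from torchvision.models import vgg19


class L1PerceptualLoss(nn.Module):
    def __init__(self, lambda_perc: float = 0.1, feature_layer: int = 35):
        super().__init__()
        vgg = vgg19(pretrained=True).features[:feature_layer].eval()
        for p in vgg.parameters():
            p.requires_grad = False          # frozen feature extractor, used only for the loss
        self.vgg = vgg
        self.l1 = nn.L1Loss()
        self.lambda_perc = lambda_perc

    def forward(self, restored, target):
        # Inputs are assumed to be 3-channel images; VGG input normalization is omitted for brevity.
        pixel_loss = self.l1(restored, target)                            # L1 reconstruction term
        perceptual_loss = self.l1(self.vgg(restored), self.vgg(target))   # feature-space term
        return pixel_loss + self.lambda_perc * perceptual_loss
```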
Source: 计算机辅助设计与图形学学报 (Journal of Computer-Aided Design & Computer Graphics), 2021, 33(2): 215-222 [CSCD core collection]
DOI: 10.3724/sp.j.1089.2021.18482
Keywords: adaptive restoration; mixed degradation; hierarchical feature fusion; perceptual loss
Affiliations:
1. Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
2. Yunnan Observatories, Chinese Academy of Sciences, Kunming 650216, China
3. Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming 650500, China
Language: Chinese
Document type: Research article
ISSN: 1003-9775
Discipline: Automation and computer technology
Funding: National Natural Science Foundation of China; Yunnan Provincial Major Science and Technology Special Program
CSCD accession number: CSCD:6924801
References (25 in total)
1. Zhang K. Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing, 2017, 26(7): 3142-3155. (cited 301 times in CSCD)
2. Chakrabarti A. A neural approach to blind motion deblurring. Proceedings of the European Conference on Computer Vision, 2016: 221-235. (cited 4 times in CSCD)
3. Lehtinen J. Noise2Noise: learning image restoration without clean data. Proceedings of the 35th International Conference on Machine Learning, 2018: 2965-2974. (cited 3 times in CSCD)
4. Soltanayev S. Training deep learning based denoisers without ground truth data. Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2018: 3257-3267. (cited 1 time in CSCD)
5. Yu K. Crafting a toolchain for image restoration by deep reinforcement learning. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018: 2443-2452. (cited 2 times in CSCD)
6. Suganuma M. Attention-based adaptive selection of operations for image restoration in the presence of unknown combined distortions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019: 9301-9040. (cited 1 time in CSCD)
7. Zhang Y L. Residual dense network for image restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020: 1-16. (cited 1 time in CSCD)
8. Chen C. Deep boosting for image denoising. Proceedings of the European Conference on Computer Vision, 2018: 3-19. (cited 1 time in CSCD)
9. Huang Y W. Densely connected high order residual network for single frame image super resolution, 2020. (cited 1 time in CSCD)
10. Li C Y. A cascaded convolutional neural network for single image dehazing. IEEE Access, 2018, 6: 24877-24887. (cited 4 times in CSCD)
11. Song Y D. Dynamic residual dense network for image denoising. Sensors, 2019, 19(17): 3809. (cited 1 time in CSCD)
12. Tong T. Image super-resolution using dense skip connections. Proceedings of the IEEE International Conference on Computer Vision, 2017: 4809-4817. (cited 6 times in CSCD)
13. Lu Y. Channel attention and multilevel features fusion for single image super-resolution, 2020. (cited 1 time in CSCD)
14. Liu D. Non-local recurrent network for image restoration. Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2018: 1680-1689. (cited 8 times in CSCD)
15. Tang H. Multi-channel attention selection GAN with cascaded semantic guidance for cross-view image translation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019: 2417-2426. (cited 1 time in CSCD)
16. Huang G. Densely connected convolutional networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 2261-2269. (cited 72 times in CSCD)
17. Johnson J. Perceptual losses for real-time style transfer and super-resolution, 2020. (cited 2 times in CSCD)
18. Kupyn O. DeblurGAN: blind motion deblurring using conditional adversarial networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 8186-8192. (cited 1 time in CSCD)
19. Ledig C. Photo-realistic single image super-resolution using a generative adversarial network. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 105-114. (cited 30 times in CSCD)
20. Simonyan K. Very deep convolutional networks for large-scale image recognition, 2020. (cited 126 times in CSCD)