Citation: MA Xiaoyi, CHEN Yihong, WANG Fei, XIE Shuo. Application of Structure Tensor-Based Image Fusion Method in Marine Exploration[J]. Journal of Unmanned Undersea Systems, 2025, 33(1): 84-91. doi: 10.11993/j.issn.2096-3920.2024-0066
[1] QIU Z M, MENG X Y, MA Y, et al. Development and key technologies of maritime unmanned systems[J]. Strategic Study of Chinese Academy of Engineering, 2023, 25(3): 74-83.
[2] BEN K R, WANG B. Thinking on the intellectualization of marine equipment and marine intelligent equipment[J]. Journal of Jiangsu University of Science and Technology (Natural Science Edition), 2021, 35(2): 1-11.
[3] ZHANG L. Research on infrared and visible image fusion technology[D]. Changchun: Graduate University of Chinese Academy of Sciences (Changchun Institute of Optics, Fine Mechanics and Physics), 2015.
[4] TOET A. Image fusion by a ratio of low-pass pyramid[J]. Pattern Recognition Letters, 1989, 9(4): 245-253.
[5] PAJARES G, DE LA CRUZ J M. A wavelet-based image fusion tutorial[J]. Pattern Recognition, 2004, 37(9): 1855-1872.
[6] YANG X H, JIN H Y, JIAO L C. Adaptive image fusion algorithm for infrared and visible light images based on DT-CWT[J]. Journal of Infrared and Millimeter Waves, 2007, 26(6): 419-424. doi: 10.3321/j.issn:1001-9014.2007.06.005
[7] LI S, KANG X, HU J. Image fusion with guided filtering[J]. IEEE Transactions on Image Processing, 2013, 22(7): 2864-2875.
[8] BAVIRISETTI D P, DHULI R. Fusion of infrared and visible sensor images based on anisotropic diffusion and Karhunen-Loeve transform[J]. IEEE Sensors Journal, 2016, 16(1): 203-209.
[9] KUMAR B K S. Image fusion based on pixel significance using cross bilateral filter[J]. Signal, Image and Video Processing, 2015, 9(5): 1193-1204.
[10] MA J, CHEN C, LI C, et al. Infrared and visible image fusion via gradient transfer and total variation minimization[J]. Information Fusion, 2016, 31: 100-109. doi: 10.1016/j.inffus.2016.02.001
[11] LI H, WU X. Infrared and visible image fusion using latent low-rank representation[EB/OL]. [2018-04-24]. https://arxiv.org/abs/1804.08992.
[12] BAVIRISETTI D P, XIAO G, ZHAO J, et al. Multi-scale guided image and video fusion: A fast and efficient approach[J]. Circuits, Systems, and Signal Processing, 2019, 38(12): 5576-5605.
[13] BAVIRISETTI D P, DHULI R. Two-scale image fusion of visible and infrared images using saliency detection[J]. Infrared Physics & Technology, 2016, 76: 52-64.
[14] LUO D. Research on visible and thermal infrared image fusion and object detection for UAV platforms[D]. Nanjing: Nanjing University of Aeronautics and Astronautics, 2021.
[15] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[J]. Advances in Neural Information Processing Systems, 2017, 30: 5998-6008.
[16] MA J, YU W, LIANG P, et al. FusionGAN: A generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11-26.
[17] XU H, LIANG P, YU W, et al. Learning a generative model for fusing infrared and visible images via conditional generative adversarial network with dual discriminators[C]//Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence. Macao, China: IJCAI, 2019.
[18] PRABHAKAR K R, SRIKAR V S, BABU R V. DeepFuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs[C]//2017 IEEE International Conference on Computer Vision. Venice, Italy: ICCV, 2017.
[19] LI H, WU X J. DenseFuse: A fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614-2623.
[20] LI H, WU X J, KITTLER J. RFN-Nest: An end-to-end residual fusion network for infrared and visible images[J]. Information Fusion, 2021, 73: 72-86.
[21] TANG W, HE F, LIU Y. TCCFusion: An infrared and visible image fusion method based on transformer and cross correlation[J]. Pattern Recognition, 2023, 137: 109295.
[22] LI H, WU X J, KITTLER J. Infrared and visible image fusion using a deep learning framework[C]//2018 24th International Conference on Pattern Recognition. Beijing, China: ICPR, 2018.
[23] LONG Y, JIA H, ZHONG Y, et al. RXDNFuse: A aggregated residual dense network for infrared and visible image fusion[J]. Information Fusion, 2021, 69: 128-141.
[24] MA J, TANG L, XU M, et al. STDFusionNet: An infrared and visible image fusion network based on salient target detection[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 1-13.
[25] LIU J, FAN X, JIANG J, et al. Learning a deep multi-scale feature ensemble and an edge-attention guidance for image fusion[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(1): 105-119.
[26] TANG L, ZHANG H, XU H, et al. Rethinking the necessity of image fusion in high-level vision tasks: A practical infrared and visible image fusion network based on progressive semantic injection and scene fidelity[J]. Information Fusion, 2023, 99: 101870.
[27] JUNG H, KIM Y, JANG H, et al. Unsupervised deep image fusion with structure tensor representations[J]. IEEE Transactions on Image Processing, 2020, 29: 3845-3858.
[28] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA: CVPR, 2016.
[29] IOFFE S, SZEGEDY C. Batch normalization: Accelerating deep network training by reducing internal covariate shift[EB/OL]. [2015-02-11]. https://arxiv.org/abs/1502.03167.
[30] MISRA D. Mish: A self regularized non-monotonic activation function[EB/OL]. [2019-08-23]. https://arxiv.org/abs/1908.08681.
[31] LIN M, CHEN Q, YAN S. Network In Network[EB/OL]. [2013-12-16]. https://arxiv.org/abs/1312.4400.
[32] WANG Z, BOVIK A C. A universal image quality index[J]. IEEE Signal Processing Letters, 2002, 9(3): 81-84. doi: 10.1109/97.995823
[33] TANG L F, ZHANG H, XU H, et al. Deep learning-based image fusion: A survey[J]. Journal of Image and Graphics, 2023, 28(1): 3-36. doi: 10.11834/jig.220422
[34] TOET A. The TNO multiband image data collection[J]. Data in Brief, 2017, 15: 249-251. doi: 10.1016/j.dib.2017.09.038