ABSTRACT
To efficiently preserve texture and target information from source images, an image fusion algorithm based on a Regional Fusion Factor-Based Union Gradient and Contrast Generative Adversarial Network (R2F-UGCGAN) is proposed. Firstly, an adaptive gradient diffusion (AGD) decomposition algorithm is designed to extract representative features: a pair of infrared (IR) and visible (VIS) images is decomposed by AGD into low-frequency components containing salient targets and high-frequency components containing rich edge-gradient information. Secondly, the high-frequency components are fused by principal component analysis (PCA), yielding more detailed images with texture gradients, while R2F-UGCGAN fuses the low-frequency components, which ensures good consistency between the target region and the background region. The resulting fused image therefore inherits more thermal-radiation information and important texture details. Finally, subjective and objective comparison experiments against state-of-the-art image fusion methods are performed on the TNO and RoadScene datasets. In both subjective and objective evaluation, the results of R2F-UGCGAN are prominent and consistent compared with these fusion algorithms.
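As a rough sketch of how the PCA fusion of the high-frequency components might look: the abstract does not specify the exact weighting scheme, so the function name `pca_fuse` and the covariance-eigenvector weighting below are assumptions, following a common PCA-based fusion variant in which the entries of the principal eigenvector of the two bands' covariance matrix serve as fusion weights.

```python
import numpy as np

def pca_fuse(hf_ir, hf_vis):
    """Fuse two high-frequency components with PCA-derived weights.

    NOTE: illustrative sketch, not the paper's exact scheme. The
    principal eigenvector of the 2x2 covariance matrix of the two
    flattened bands supplies the (normalised) fusion weights.
    """
    data = np.stack([hf_ir.ravel(), hf_vis.ravel()])  # shape (2, N)
    cov = np.cov(data)                                # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues ascending
    v = np.abs(eigvecs[:, -1])                        # principal eigenvector
    w = v / v.sum()                                   # weights summing to 1
    return w[0] * hf_ir + w[1] * hf_vis
```

Because the weights sum to one, the fused band stays on the same scale as the inputs; the component with greater variance (typically the one carrying more edge detail) receives the larger weight.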
Disclosure statement
No potential conflict of interest was reported by the author(s).