Infrared and visible image fusion based on NSCT and stacked sparse autoencoders

Research paper by Xiaoqing Luo, Xinyi Li, Pengfei Wang, Shuhan Qi, Jian Guan, Zhancheng Zhang

Indexed on: 30 May '18 · Published on: 29 May '18 · Published in: Multimedia Tools and Applications



Abstract

To integrate infrared objects into the fused image effectively, a novel infrared (IR) and visible (VI) image fusion method based on the nonsubsampled contourlet transform (NSCT) and stacked sparse autoencoders (SSAE) is proposed. First, the IR and VI images are decomposed into low-frequency and high-frequency subbands using NSCT. Second, an SSAE is applied to the low-frequency subband of the IR image to calculate the object reliabilities (OR) of its coefficients. Subsequently, an adaptive multi-strategy fusion rule based on OR is designed for fusing the low-frequency subbands, while a choose-max rule on the absolute values of the coefficients is employed for fusing the high-frequency subbands. Experimental results show that the proposed method is superior to conventional methods in highlighting infrared objects while preserving the background information of the VI image.
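The two fusion rules named in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the NSCT decomposition and the SSAE-derived reliability map are assumed to exist already, and the OR-weighted low-frequency rule shown here (a convex combination driven by the reliability map) is one plausible reading of the "adaptive multi-strategy fusion rule". The function names `fuse_high_freq` and `fuse_low_freq` are hypothetical.

```python
import numpy as np

def fuse_high_freq(hf_ir: np.ndarray, hf_vi: np.ndarray) -> np.ndarray:
    """Choose-max rule: at each position, keep the high-frequency
    coefficient with the larger absolute value."""
    return np.where(np.abs(hf_ir) >= np.abs(hf_vi), hf_ir, hf_vi)

def fuse_low_freq(lf_ir: np.ndarray, lf_vi: np.ndarray,
                  reliability: np.ndarray) -> np.ndarray:
    """Reliability-weighted blend of low-frequency coefficients.
    `reliability` is an object-reliability (OR) map in [0, 1]
    (assumed here to come from an SSAE applied to the IR subband);
    values near 1 favor the IR coefficient, near 0 the VI one."""
    return reliability * lf_ir + (1.0 - reliability) * lf_vi
```

In the full method these two rules would run on the NSCT subbands, after which the inverse NSCT reconstructs the fused image; the sketch only captures the per-coefficient decisions.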