Infrared and visible image fusion method based on deep multi-classification generative adversarial network
Affiliation:

1. Sichuan University of Science and Engineering, Artificial Intelligence Key Laboratory of Sichuan Province, Yibin 644000, China; 2. Chengdu Xice Defence Technology Co., Ltd., Chengdu 610000, China; 3. Sichuan University of Science and Engineering, Key Laboratory of Higher Education of Sichuan Province for Enterprise Informationalization and Internet of Things, Yibin 644000, China

CLC Number: TP391.7

Abstract:

To achieve a good balance between infrared and visible image information, this paper proposes a deep multi-classification infrared and visible image fusion method based on generative adversarial network technology. The method introduces a main-auxiliary scheme into the generator's extraction of gradient and intensity information, improving the ability of the generator's convolutional layers to extract both deep and shallow features. In the discriminator, multiple classifiers are used to estimate the distributions of the visible and infrared regions simultaneously. Through continuous adversarial learning, the fusion results exhibit notable contrast and rich texture details. The proposed method attains an information entropy (Shannon entropy) of 6.86, a mutual information of 13.72, a standard deviation of 34.82, and a structural similarity of 0.71. The experimental results show that the proposed method achieves better infrared and visible image fusion performance in both subjective and objective evaluation.
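The abstract only outlines the architecture, so the following is a minimal PyTorch sketch of how such a main-auxiliary generator and multi-classification discriminator could be organized. All channel widths, layer counts, and the three-way class layout (fused / visible / infrared) are illustrative assumptions, not details taken from the paper.

```python
# Sketch of a main/auxiliary fusion generator and a multi-class discriminator.
# Layer sizes and the class layout are assumptions for illustration only.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Main (deep) and auxiliary (shallow) paths extract gradient and
    intensity information from the concatenated IR/visible pair."""
    def __init__(self):
        super().__init__()
        # main path: deeper stack for fine texture / gradient features
        self.main = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 32, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 32, 3, padding=1), nn.LeakyReLU(0.2),
        )
        # auxiliary path: shallow stack preserving coarse intensity information
        self.aux = nn.Sequential(
            nn.Conv2d(2, 32, 5, padding=2), nn.LeakyReLU(0.2),
        )
        # fuse both paths into a single-channel fused image
        self.fuse = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 1), nn.Tanh(),
        )

    def forward(self, ir, vis):
        x = torch.cat([ir, vis], dim=1)               # (B, 2, H, W)
        feats = torch.cat([self.main(x), self.aux(x)], dim=1)
        return self.fuse(feats)

class MultiClassDiscriminator(nn.Module):
    """Multi-class head scoring whether an input patch resembles the fused
    output, the visible image, or the infrared image."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, img):
        f = self.features(img).flatten(1)
        return self.classifier(f)                     # per-class logits

if __name__ == "__main__":
    ir = torch.rand(1, 1, 128, 128)
    vis = torch.rand(1, 1, 128, 128)
    fused = Generator()(ir, vis)
    logits = MultiClassDiscriminator()(fused)
    print(fused.shape, logits.shape)                  # (1,1,128,128) (1,3)
```

In such a setup, the multi-class logits would drive the adversarial loss so that the generator is pushed to make the fused image plausible with respect to both source distributions at once, which is the balance the abstract describes.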

History
  • Online: June 12, 2024