Abstract: In response to common image degradation issues in complex underwater environments, such as low light, color distortion, and blurring, this paper proposes an image enhancement model based on multi-input fusion. First, the model derives two inputs from the original image alone: a white-balanced version of the standard underwater image and a denoised, contrast-enhanced version. Corresponding weights are generated from the image degradation information, so the restrictive effects of the underwater medium are addressed without any additional input. Four types of weight maps are then designed to improve the visibility of distant objects, which is degraded by light scattering and absorption, thereby improving the overall visual quality and detail representation of the image. Finally, a multi-scale fusion process progressively merges features at different scales, reducing artifacts and enhancing image details. Experimental results show that the proposed model achieves average values of 0.6603 for UCIQE, 4.5569 for UIQM, and 7.4341 for information entropy on the UIEB, EUVP, and RUIE datasets. Compared with other representative and recent algorithms, the proposed model performs better in color distortion correction, detail enrichment, contrast enhancement, and subjective visual quality, validating its superiority and robustness in underwater image enhancement.
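To make the two-input, weight-guided, multi-scale fusion pipeline described above concrete, the sketch below illustrates one possible realization; it is not the authors' implementation. It assumes grey-world white balance, non-local-means denoising followed by CLAHE as the contrast-enhanced input, a single combined contrast/saturation/exposedness map standing in for the four weight maps, Laplacian-pyramid fusion, and placeholder file names and parameters.

```python
# Illustrative sketch only (assumed components, not the paper's exact method).
import cv2
import numpy as np

def white_balance(img):
    """Grey-world white balance on a BGR uint8 image (assumed variant)."""
    f = img.astype(np.float32)
    means = f.reshape(-1, 3).mean(axis=0)
    f *= means.mean() / (means + 1e-6)
    return np.clip(f, 0, 255).astype(np.uint8)

def contrast_enhance(img):
    """Denoise, then apply CLAHE on the luminance channel (assumed variant)."""
    den = cv2.fastNlMeansDenoisingColored(img, None, 5, 5, 7, 21)
    lab = cv2.cvtColor(den, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab[:, :, 0] = clahe.apply(lab[:, :, 0])
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

def weight_map(img):
    """Single stand-in weight: Laplacian contrast + saturation + exposedness."""
    f = img.astype(np.float32) / 255.0
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    contrast = np.abs(cv2.Laplacian(gray, cv2.CV_32F))
    saturation = f.std(axis=2)
    exposedness = np.exp(-((gray - 0.5) ** 2) / (2 * 0.25 ** 2))
    return contrast + saturation + exposedness + 1e-6

def pyramid_fuse(inputs, weights, levels=5):
    """Multi-scale fusion: Laplacian pyramids of the inputs blended with
    Gaussian pyramids of the normalized weight maps."""
    total = sum(weights)
    weights = [w / total for w in weights]
    fused = None
    for img, w in zip(inputs, weights):
        gp_w = [w]                               # Gaussian pyramid of the weight
        gp_i = [img.astype(np.float32)]          # Gaussian pyramid of the image
        for _ in range(levels):
            gp_w.append(cv2.pyrDown(gp_w[-1]))
            gp_i.append(cv2.pyrDown(gp_i[-1]))
        lap = [gp_i[-1]]                         # Laplacian pyramid (coarse last)
        for k in range(levels, 0, -1):
            up = cv2.pyrUp(gp_i[k], dstsize=gp_i[k - 1].shape[1::-1])
            lap.insert(0, gp_i[k - 1] - up)
        blended = [l * gp_w[k][..., None] for k, l in enumerate(lap)]
        fused = blended if fused is None else [a + b for a, b in zip(fused, blended)]
    out = fused[-1]                              # collapse fused pyramid
    for k in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=fused[k].shape[1::-1]) + fused[k]
    return np.clip(out, 0, 255).astype(np.uint8)

# Placeholder file names for illustration only.
original = cv2.imread("underwater.jpg")
inputs = [white_balance(original), contrast_enhance(original)]
weights = [weight_map(x) for x in inputs]
cv2.imwrite("enhanced.jpg", pyramid_fuse(inputs, weights))
```

The pyramid blending step is what "progressively merges features at different scales": low-frequency color corrections are mixed at coarse levels while edges and fine detail are mixed at fine levels, which suppresses the halo artifacts that single-scale weighted averaging tends to produce.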