Baodi Liu, Jing Tian, Zhenlong Wang, et al., “DCUGAN: dual contrastive learning GAN for unsupervised underwater image enhancement,” Chinese Journal of Electronics, vol. x, no. x, pp. 1–11, xxxx. DOI: 10.23919/cje.2023.00.257

DCUGAN: Dual Contrastive Learning GAN for Unsupervised Underwater Image Enhancement

Most existing deep-learning-based underwater image enhancement methods rely heavily on synthetic paired underwater images, which limits their practicality and generalization. Unsupervised methods can be trained on unpaired data and thus avoid this reliance, but existing unsupervised approaches suffer from poor color correction, artifacts, and blurred details in the generated images. This paper therefore proposes a dual GAN with contrastive learning constraints for unsupervised underwater image enhancement. First, we construct a dual GAN for image-to-image translation. Second, we use patch-based contrastive learning to maximize the mutual information between inputs and outputs, eliminating the need for paired data. Third, we apply an image gradient difference loss to mitigate artifacts in the generated images. Finally, to address blurred details, we incorporate channel attention into the generator so that it focuses on more informative content and improves the quality of the generated images. Extensive experiments demonstrate that our method improves the visual quality of the enhanced results.
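The abstract outlines four components; hedged sketches of each follow. The paper's exact architecture is not specified here, so the first sketch only illustrates the general shape of a dual (two-direction) GAN generator update, with `G_xy`, `G_yx`, `D_x`, and `D_y` as hypothetical user-defined modules:

```python
import torch
import torch.nn as nn

def dual_gan_generator_step(G_xy, G_yx, D_x, D_y, x, y, adv_loss=nn.MSELoss()):
    """Hypothetical generator update for a dual GAN: two generators map
    between the underwater (X) and enhanced (Y) domains, each paired
    with a discriminator on its target domain."""
    fake_y = G_xy(x)  # underwater -> enhanced
    fake_x = G_yx(y)  # enhanced -> underwater
    pred_y = D_y(fake_y)
    pred_x = D_x(fake_x)
    # Least-squares adversarial terms: each generator tries to make its
    # discriminator output "real" (ones) on generated images.
    loss_g = adv_loss(pred_y, torch.ones_like(pred_y)) \
           + adv_loss(pred_x, torch.ones_like(pred_x))
    return loss_g, fake_x, fake_y
```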
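For the patch-based mutual-information objective, the description resembles the PatchNCE loss popularized by CUT: patches of the output at a given spatial location should be similar to the input patch at the same location and dissimilar to patches elsewhere. A minimal sketch, assuming per-location patch features have already been extracted:

```python
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_q, feat_k, temperature=0.07):
    """PatchNCE-style contrastive loss.

    feat_q: (N, C) patch features from the generated (output) image.
    feat_k: (N, C) patch features from the input image at the same
            N spatial locations; row i of feat_k is the positive for
            row i of feat_q, all other rows serve as negatives.
    """
    feat_q = F.normalize(feat_q, dim=1)
    feat_k = F.normalize(feat_k, dim=1).detach()  # stop-gradient on keys
    logits = feat_q @ feat_k.t() / temperature    # (N, N) similarity matrix
    # Diagonal entries are the positives (matching locations).
    targets = torch.arange(feat_q.size(0), device=feat_q.device)
    return F.cross_entropy(logits, targets)
```

Maximizing mutual information this way ties each output patch to its input patch, which is what removes the need for paired training images.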
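The image gradient difference loss is not defined in the abstract; the common formulation (after Mathieu et al.) penalizes mismatch between the spatial gradients of the generated image and a reference, which sharpens edges and suppresses artifacts. A sketch under that assumption:

```python
import torch

def gradient_difference_loss(pred, target, alpha=1.0):
    """Gradient difference loss between (B, C, H, W) image tensors."""
    def grads(img):
        dx = img[:, :, :, 1:] - img[:, :, :, :-1]  # horizontal differences
        dy = img[:, :, 1:, :] - img[:, :, :-1, :]  # vertical differences
        return dx, dy

    pdx, pdy = grads(pred)
    tdx, tdy = grads(target)
    return ((pdx - tdx).abs() ** alpha).mean() \
         + ((pdy - tdy).abs() ** alpha).mean()
```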
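Finally, the channel attention in the generator is not detailed in the abstract; a minimal squeeze-and-excitation-style sketch, which is one standard way to realize it:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style channel attention: global average pooling summarizes each
    channel, a small bottleneck MLP produces per-channel weights in (0, 1),
    and the feature maps are rescaled channel-wise."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))  # (B, C) descriptors -> weights
        return x * w.view(b, c, 1, 1)    # emphasize informative channels
```

Inserted into the generator, such a block lets the network reweight channels toward the more informative content, which is the stated mechanism for reducing blurred details.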
