Volume 29, Issue 6, Dec. 2020
Citation: QIANG Zhenping, HE Libo, DAI Fei, et al., “Image Inpainting Based on Improved Deep Convolutional Auto-encoder Network,” Chinese Journal of Electronics, vol. 29, no. 6, pp. 1074-1084, 2020, doi: 10.1049/cje.2020.09.008

Image Inpainting Based on Improved Deep Convolutional Auto-encoder Network

doi: 10.1049/cje.2020.09.008
Funds:  This work is supported by the Yunnan Fundamental Research Projects (No.202001AT070135, No.202002AD080002, No.2019ZE005), and Key Scientific Research Foundation Project of Southwest Forestry University (No.111827).
  • Received Date: 2019-10-17
  • Publish Date: 2020-12-25
  • Abstract: This paper proposes an effective image inpainting method using an improved deep convolutional auto-encoder network. Compared with classical exemplar-based methods, existing inpainting methods built on deep convolutional auto-encoder networks are significantly more effective at capturing high-level features; however, their inpainted regions tend to appear blurry and globally inconsistent. To alleviate the blurriness, we improve the network model by adding skip connections between mirrored layers in the encoder and decoder stacks, so that the generation of the inpainted region can directly use low-level feature information from the image being processed. To make the inpainted result look both more plausible and more consistent with its surrounding context, the model is trained with a combination of a standard pixel-wise reconstruction loss and two adversarial losses, which together ensure pixel accuracy and local-global content consistency. Through extensive experiments on the ImageNet and Paris StreetView datasets, we demonstrate qualitatively and quantitatively that our approach outperforms the state of the art.
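The following is a minimal PyTorch sketch of the idea described in the abstract: an encoder-decoder generator whose mirrored layers are joined by skip connections, trained with a pixel-wise reconstruction loss plus local and global adversarial losses. It is an illustration under assumed layer widths, channel counts, and loss weights, not the authors' exact network; the discriminators d_local and d_global are assumed to be ordinary convolutional classifiers supplied by the caller.

    # Illustrative sketch only; layer widths, depths, and loss weights are
    # assumptions for exposition, not the values used in the paper.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def conv_block(in_ch, out_ch):
        # 4x4 stride-2 convolution halves the spatial resolution
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.2, inplace=True))

    def deconv_block(in_ch, out_ch):
        # 4x4 stride-2 transposed convolution doubles the spatial resolution
        return nn.Sequential(
            nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True))

    class SkipAutoEncoder(nn.Module):
        """Convolutional auto-encoder with skip connections between mirrored layers."""
        def __init__(self):
            super().__init__()
            self.e1 = conv_block(3, 64)
            self.e2 = conv_block(64, 128)
            self.e3 = conv_block(128, 256)
            self.d3 = deconv_block(256, 128)
            self.d2 = deconv_block(128 + 128, 64)   # decoder features + skip from e2
            self.d1 = nn.ConvTranspose2d(64 + 64, 3, kernel_size=4, stride=2, padding=1)

        def forward(self, x):
            f1 = self.e1(x)
            f2 = self.e2(f1)
            f3 = self.e3(f2)
            u3 = self.d3(f3)
            u2 = self.d2(torch.cat([u3, f2], dim=1))   # low-level features reused directly
            return torch.tanh(self.d1(torch.cat([u2, f1], dim=1)))

    def generator_loss(output, target, mask, d_local, d_global,
                       lam_rec=0.999, lam_adv=0.001):
        """Pixel-wise reconstruction loss plus local and global adversarial terms."""
        # reconstruction error on the missing (masked) region
        rec = F.mse_loss(output * mask, target * mask)
        # local adversarial term: fool a discriminator that sees only the inpainted patch
        logits_local = d_local(output * mask)
        adv_local = F.binary_cross_entropy_with_logits(
            logits_local, torch.ones_like(logits_local))
        # global adversarial term: fool a discriminator that sees the whole image
        logits_global = d_global(output)
        adv_global = F.binary_cross_entropy_with_logits(
            logits_global, torch.ones_like(logits_global))
        return lam_rec * rec + lam_adv * (adv_local + adv_global)

In a full training loop, the two discriminators would be updated alternately with the generator on real and inpainted samples, as in a standard GAN setup.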
