WANG Xuanhong, LI Cong, SUN Zengguo, et al., “Review of GAN-Based Research on Chinese Character Font Generation,” Chinese Journal of Electronics, in press, doi: 10.23919/cje.2022.00.402, 2023.

Review of GAN-Based Research on Chinese Character Font Generation

doi: 10.23919/cje.2022.00.402
Funds:  This work was supported by the National Natural Science Foundation of China (No. 61102163), the Fundamental Research Funds for the Central Universities (No. GK202205036), and the Xi’an University of Posts and Telecommunications Graduate Innovation Fund (CXJJYL2022004).
More Information
  • Author Bios:

    Xuanhong WANG is a senior engineer at the School of Telecommunication and Information Engineering, Xi’an University of Posts and Telecommunications, China. He received his M.S. degree from Xi’an University of Science and Technology in 2006. His research interests include the integration of computer graphics, pattern recognition, and computer vision. (Email: xiyouwxh@163.com)

    Cong LI received the B.S. degree in communication engineering from Xi’an University of Posts and Telecommunications, China, in 2021. He is currently pursuing the master’s degree at Xi’an University of Posts and Telecommunications. His current research interests include computer vision and deep learning. (Email: licong11@stu.xupt.edu.cn)

    Zengguo SUN (corresponding author) received his Ph.D. degree from Xi’an Jiaotong University, China, in 2010. Currently he is an associate professor at the School of Computer Science, Shaanxi Normal University, China. His current research interests include artificial intelligence, computer vision, deep learning, and graphics and image processing. (Email: sunzg@snnu.edu.cn)

    Luying HUI received the B.S. degree in electronic information engineering from Xi’an University of Posts and Telecommunications, China, in 2021. She is currently pursuing her master’s degree at Xi’an University of Posts and Telecommunications, China. Her research interests include computer vision and artificial intelligence. (Email: huiluying47@stu.xupt.edu.cn)

  • Received Date: 2022-11-24
  • Accepted Date: 2023-03-14
  • Available Online: 2023-07-14
  • With the rapid development of deep learning, the generative adversarial network (GAN) has become a research hotspot in computer vision, with a wide range of applications in image generation. Inspired by GAN, a series of Chinese character font generation models have been proposed in recent years. This paper analyzes and summarizes the latest research progress in Chinese character font generation. First, GAN and its development history are reviewed. Second, GAN-based methods for Chinese character font generation, together with their improvements, are categorized according to whether they consider the specific elements of Chinese characters. Then, the public datasets used for font generation are summarized in detail, and various application scenarios of font generation are presented. Finally, evaluation metrics for font generation are systematically summarized from both qualitative and quantitative perspectives. This paper contributes to in-depth research on Chinese character font generation and supports the inheritance and development of Chinese culture, of which Chinese characters are a carrier.
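  • The adversarial training that underlies all of the surveyed font generation methods can be illustrated with a minimal sketch. The snippet below (a toy scalar example in plain Python, not taken from the paper; the names `d_loss` and `g_loss` are illustrative) shows the standard GAN objective: the discriminator is trained to tell real glyph images from generated ones, while the generator is trained with the non-saturating loss to fool it.

    ```python
    import math

    def sigmoid(x):
        """Logistic function used as the discriminator's output activation."""
        return 1.0 / (1.0 + math.exp(-x))

    def d_loss(d_real, d_fake):
        """Discriminator loss: maximize log D(x) + log(1 - D(G(z))),
        written here as minimizing the negative of that sum."""
        return -(math.log(d_real) + math.log(1.0 - d_fake))

    def g_loss(d_fake):
        """Non-saturating generator loss: maximize log D(G(z))."""
        return -math.log(d_fake)

    # At the theoretical equilibrium the discriminator outputs 0.5 everywhere,
    # i.e. it can no longer distinguish real glyphs from generated ones:
    print(round(d_loss(0.5, 0.5), 4))  # 2*ln(2) ≈ 1.3863
    print(round(g_loss(0.5), 4))       # ln(2) ≈ 0.6931
    ```

    In practice, the surveyed models alternate gradient steps on these two losses, often extending them with conditional inputs (reference glyphs), cycle-consistency terms, or stroke/radical supervision, as the review discusses.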


    Figures(9)  / Tables(3)

    Article Metrics

    Article views (552)  PDF downloads (62)