QIN Chao and GAO Xiaoguang, “Spatio-Temporal Generative Adversarial Networks,” Chinese Journal of Electronics, vol. 29, no. 4, pp. 623-631, 2020, doi: 10.1049/cje.2020.04.001

Spatio-Temporal Generative Adversarial Networks

doi: 10.1049/cje.2020.04.001
Funds:  This work is supported by the National Natural Science Foundation of China (No. 61573285).
  • Received Date: 2019-01-18
  • Rev Recd Date: 2019-12-05
  • Publish Date: 2020-07-10
  • Abstract: We designed a spatiotemporal generative adversarial network that, given some initial data and random noise, generates a consecutive sequence of spatiotemporal samples that have a logical relationship. We build spatial discriminators and temporal discriminators to judge whether the samples produced by the generator satisfy the requirements of spatial and temporal coherence. The model is trained on a skeletal dataset and on the Caltrans Performance Measurement System District 7 dataset. In contrast to traditional generative adversarial networks (GANs), the proposed spatiotemporal GAN can generate logically coherent samples with the corresponding spatial and temporal features while avoiding mode collapse. In addition, we show that the model can generate different styles of spatiotemporal samples given different random noise inputs. This model will extend the potential range of applications of GANs to areas such as traffic information simulation and multiagent adversarial simulation. (An illustrative code sketch of this setup is given below.)
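The following is a minimal sketch of the setup described in the abstract: a generator that rolls out a sequence of frames from an initial frame plus noise, a spatial discriminator that scores each frame, and a temporal discriminator that scores the whole sequence. It assumes a PyTorch implementation; all module choices, layer sizes, and names (e.g. FRAME_DIM, SEQ_LEN) are illustrative assumptions and are not taken from the authors' code.

# Illustrative spatiotemporal GAN components (assumed PyTorch sketch, not the authors' implementation).
import torch
import torch.nn as nn

FRAME_DIM = 75      # e.g. 25 joints x 3 coordinates per skeleton frame (assumed)
NOISE_DIM = 64      # size of the random noise vector (assumed)
HIDDEN    = 128     # hidden width of all modules (assumed)
SEQ_LEN   = 20      # number of frames the generator rolls out (assumed)

class Generator(nn.Module):
    """Maps (initial frame, noise) to a sequence of frames with a GRU."""
    def __init__(self):
        super().__init__()
        self.init_fc = nn.Linear(FRAME_DIM + NOISE_DIM, HIDDEN)
        self.rnn = nn.GRU(FRAME_DIM, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, FRAME_DIM)

    def forward(self, first_frame, z):
        # Initial data and noise set the recurrent state; frames are rolled out autoregressively.
        h = torch.tanh(self.init_fc(torch.cat([first_frame, z], dim=-1))).unsqueeze(0)
        frame, frames = first_frame, []
        for _ in range(SEQ_LEN):
            out, h = self.rnn(frame.unsqueeze(1), h)
            frame = self.out(out.squeeze(1))
            frames.append(frame)
        return torch.stack(frames, dim=1)           # (batch, SEQ_LEN, FRAME_DIM)

class SpatialDiscriminator(nn.Module):
    """Scores each frame independently for spatial plausibility."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FRAME_DIM, HIDDEN), nn.LeakyReLU(0.2),
            nn.Linear(HIDDEN, 1))

    def forward(self, seq):                          # seq: (batch, T, FRAME_DIM)
        return self.net(seq).mean(dim=1)             # average per-frame scores

class TemporalDiscriminator(nn.Module):
    """Scores the whole sequence for temporal coherence with a GRU."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(FRAME_DIM, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, 1)

    def forward(self, seq):
        _, h = self.rnn(seq)
        return self.out(h.squeeze(0))

if __name__ == "__main__":
    g, d_s, d_t = Generator(), SpatialDiscriminator(), TemporalDiscriminator()
    first = torch.randn(4, FRAME_DIM)
    z = torch.randn(4, NOISE_DIM)
    fake = g(first, z)
    print(fake.shape, d_s(fake).shape, d_t(fake).shape)
    # torch.Size([4, 20, 75]) torch.Size([4, 1]) torch.Size([4, 1])

In training, both discriminators would be fed real and generated sequences and optimized adversarially against the generator; the particular adversarial objective and training schedule are not specified here and would follow the paper.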
