QIU Wenliang, GAO Xinbo, HAN Bing. Video Saliency Detection via Pairwise Interaction[J]. Chinese Journal of Electronics, 2020, 29(3): 427-436. doi: 10.1049/cje.2020.02.018

Video Saliency Detection via Pairwise Interaction

doi: 10.1049/cje.2020.02.018
Funds: This work is supported by the National High-level Talents Special Support Program (No.CS31117200001), the National Key Research and Development Program of China (No.2016QY01W0200), the National Natural Science Foundation of China (No.61772402, No.61572384, No.61671339, No.U1605252), and the Shaanxi Key Technologies Research Program (No.2017KW-017).
More Information
  • Corresponding author: GAO Xinbo (corresponding author) received the B.E., M.S., and Ph.D. degrees in signal and information processing from Xidian University, Xi'an, China, in 1994, 1997, and 1999, respectively. He is currently a Professor at Xidian University and the Director of the State Key Laboratory of Integrated Services Networks, Xi'an, China. His research interests include computer vision and machine learning. (Email: xbgao@mail.xidian.edu.cn)
  • Received Date: 2018-03-22
  • Rev Recd Date: 2018-04-26
  • Publish Date: 2020-05-10
  • We propose a novel video saliency detection method based on pairwise interaction learning. Unlike traditional video saliency detection methods, which mostly combine spatial and temporal features, we adopt the Least-squares conditional random field (LS-CRF) to capture the interactions of regions within a frame and across video frames. Specifically, dual graph-connection models are built on the superpixel structure of each frame for training and testing, respectively. To extract the essential scene structure from video sequences, the LS-CRF learns background textures, object components, and the various relationships between foreground and background regions from the training set; in the testing phase, each region is assigned an inferred saliency value. Benefiting from the learned diverse relations among scene regions, the proposed approach achieves reliable results, especially on scenes with multiple objects or highly complicated scenes. Further, we substitute weak saliency maps for pixel-wise annotations in the training phase to verify the extensibility and practicability of the proposed method. Extensive quantitative and qualitative experiments on various video sequences demonstrate that the proposed algorithm outperforms conventional saliency detection algorithms.
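  • The two ingredients of the abstract — a pairwise graph over superpixels and a closed-form least-squares inference over it — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names, the hand-made 4x4 label map standing in for SLIC superpixels, and the simple quadratic propagation energy (solve (I + λL)s = u, with L the graph Laplacian) are our own assumptions; the paper's LS-CRF additionally learns the pairwise potentials from training data, and the same adjacency idea extends temporally by linking spatially overlapping superpixels in consecutive frames.

```python
import numpy as np

def spatial_edges(labels):
    """Pairwise links between superpixels that touch within one frame
    (4-neighbourhood on the pixel grid)."""
    pairs = np.vstack([
        np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1),
        np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1),
    ])
    # keep each undirected edge once, as an (low, high) label pair
    return {(int(min(a, b)), int(max(a, b))) for a, b in pairs if a != b}

def propagate_saliency(n_nodes, edges, unary, lam=1.0):
    """Closed-form quadratic propagation over the region graph:
    minimise sum_i (s_i - u_i)^2 + lam * sum_{(i,j)} (s_i - s_j)^2,
    i.e. solve (I + lam * L) s = u with L the graph Laplacian."""
    L = np.zeros((n_nodes, n_nodes))
    for i, j in edges:
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0
    return np.linalg.solve(np.eye(n_nodes) + lam * L, unary)

# toy frame: four hand-made "superpixels" on a 4x4 grid
frame = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 3, 3],
                  [2, 2, 3, 3]])
edges = spatial_edges(frame)            # {(0, 1), (0, 2), (1, 3), (2, 3)}
unary = np.array([1.0, 0.8, 0.1, 0.0])  # rough per-region saliency cues
s = propagate_saliency(4, edges, unary, lam=0.5)
```

    In a full pipeline, `frame` would come from a superpixel algorithm such as SLIC and `unary` from per-region appearance/motion cues; the propagation smooths the cues across neighbouring regions while preserving their ordering.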