Citation: Deqing WANG and Guoqiang HU, “Efficient nonnegative tensor decomposition using alternating direction proximal method of multipliers,” Chinese Journal of Electronics, vol. x, no. x, pp. 1–9, xxxx, doi: 10.23919/cje.2023.00.035

Efficient nonnegative tensor decomposition using alternating direction proximal method of multipliers

doi: 10.23919/cje.2023.00.035
More Information
  • Author Bio:

    Deqing WANG received the B.E. degree in automation and the M.E. degree in pattern recognition and intelligent system from Harbin Engineering University, Harbin, China, in 2009 and 2012, respectively. He received his Ph.D. degree in 2019 from the University of Jyväskylä, Jyväskylä, Finland. He was appointed as an assistant engineer and an engineer at Dalian Scientific Test and Control Technology Institute, China Shipbuilding Industry Corporation (CSIC), Dalian, China, in 2012 and 2014, respectively. He is currently an assistant professor at Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China. His research interests include signal processing, machine learning, tensor decomposition, robotics and intelligent systems. (Email: deqing.wang@foxmail.com)

    Guoqiang HU received the B.E. degree and the Ph.D. degree in biomedical engineering from Dalian University of Technology, Dalian, China, in 2015 and 2022, respectively. He was a visiting doctoral student at Harvard Medical School, Boston, MA, USA, from 2018 to 2020. He is currently a lecturer at the College of Artificial Intelligence, Dalian Maritime University, Dalian, China. His research interests include brain signal analysis and processing, independent component analysis, tensor decomposition and artificial intelligence. (Email: guoqiang.hu@dlmu.edu.cn)

  • Corresponding author: Email: deqing.wang@foxmail.com
  • Available Online: 2023-12-13
  • Abstract: Nonnegative CANDECOMP/PARAFAC (NCP) tensor decomposition is a powerful tool for multiway signal processing. The alternating direction method of multipliers (ADMM) has become increasingly popular for solving tensor decomposition problems within the block coordinate descent framework. However, ADMM-based NCP suffers from rank deficiency and slow convergence on some large-scale, highly sparse tensor data. The proximal algorithm is a preferred way to enhance optimization algorithms and improve their convergence properties. In this study, we propose a novel NCP algorithm based on the alternating direction proximal method of multipliers (ADPMM), which incorporates a proximal algorithm. The proposed NCP algorithm guarantees convergence and overcomes rank deficiency. Moreover, we implement the proposed NCP using an inexact scheme that alternately optimizes the subproblems, with each subproblem solved by a finite number of inner iterations, which yields fast computation. Our NCP algorithm is a hybrid of alternating optimization and ADPMM and is named A2DPMM. Experimental results on synthetic and real-world tensors demonstrate the effectiveness and efficiency of the proposed algorithm. (A minimal illustrative sketch of the inexact inner-iteration update appears after the reference list below.)
  • 1http://www.svcl.ucsd.edu/projects/anomaly/dataset.html
    2https://fcon_1000.projects.nitrc.org/indi/retro/parkinsons.html
    3The spatial components were plotted using the software REST [34]. REST can be downloaded from http://www.rfmri.org/REST.
  • [1]
    A. Cichocki, R. Zdunek, A. H. Phan, et al., Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-Way Data Analysis and Blind Source Separation. John Wiley & Sons, Hoboken, NJ, USA, 2009.
    [2]
    A. Cichocki, D. Mandic, L. De Lathauwer, et al., “Tensor decompositions for signal processing applications: From two-way to multiway component analysis,” IEEE Signal Processing Magazine, vol. 32, no. 2, pp. 145–163, 2015. doi: 10.1109/msp.2013.2297439
    [3]
    N. D. Sidiropoulos, L. De Lathauwer, X. Fu, et al., “Tensor decomposition for signal processing and machine learning,” IEEE Transactions on Signal Processing, vol. 65, no. 13, pp. 3551–3582, 2017. doi: 10.1109/tsp.2017.2690524
    [4]
    Y. P. Liu, Tensors for Data Processing. Elsevier, Amsterdam, The Netherlands, 2022.
    [5]
    Y. Y. Xu and W. T. Yin, “A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion,” SIAM Journal on Imaging Sciences, vol. 6, no. 3, pp. 1758–1789, 2013. doi: 10.1137/120887795
    [6]
    Y. Zhang, G. X. Zhou, Q. B. Zhao, et al., “Fast nonnegative tensor factorization based on accelerated proximal gradient and low-rank approximation,” Neurocomputing, vol. 198, pp. 148–154, 2016. doi: 10.1016/j.neucom.2015.08.122
    [7]
    D. Q. Wang and F. Y. Cong, “An inexact alternating proximal gradient algorithm for nonnegative CP tensor decomposition,” Science China Technological Sciences, vol. 64, no. 9, pp. 1893–1906, 2021. doi: 10.1007/s11431-020-1840-4
    [8]
    D. Q. Wang, Z. Chang, and F. Y. Cong, “Sparse nonnegative tensor decomposition using proximal algorithm and inexact block coordinate descent scheme,” Neural Computing and Applications, vol. 33, no. 24, pp. 17369–17387, 2021. doi: 10.1007/s00521-021-06325-8
    [9]
    J. Kim and H. Park, “Fast nonnegative tensor factorization with an active-set-like method,” in High-Performance Scientific Computing, M. W. Berry, K. A. Gallivan, E. Gallopoulos, et al., Eds. Springer, London, UK, pp. 311–326, 2012.
    [10]
    K. J. Huang, N. D. Sidiropoulos, and A. P. Liavas, “A flexible and efficient algorithmic framework for constrained matrix and tensor factorization,” IEEE Transactions on Signal Processing, vol. 64, no. 19, pp. 5052–5065, 2016. doi: 10.1109/tsp.2016.2576427
    [11]
    S. Boyd, N. Parikh, E. Chu, et al., “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011. doi: 10.1561/2200000016
    [12]
    Z. C. Lin, H. Li, and C. Fang, Alternating Direction Method of Multipliers for Machine Learning. Springer, Singapore, 2022.
    [13]
    A. P. Liavas and N. D. Sidiropoulos, “Parallel algorithms for constrained tensor factorization via alternating direction method of multipliers,” IEEE Transactions on Signal Processing, vol. 63, no. 20, pp. 5450–5463, 2015. doi: 10.1109/tsp.2015.2454476
    [14]
    M. Roald, C. Schenker, V. D. Calhoun, et al., “An AO-ADMM approach to constraining PARAFAC2 on all modes,” SIAM Journal on Mathematics of Data Science, vol. 4, no. 3, pp. 1191–1222, 2022. doi: 10.1137/21m1450033
    [15]
    C. Schenker, J. E. Cohen, and E. Acar, “A flexible optimization framework for regularized matrix-tensor factorizations with linear couplings,” IEEE Journal of Selected Topics in Signal Processing, vol. 15, no. 3, pp. 506–521, 2021. doi: 10.1109/jstsp.2020.3045848
    [16]
    Q. M. Yao, Y. Q. Wang, B. Han, et al., “Low-rank tensor learning with nonconvex overlapped nuclear norm regularization,” The Journal of Machine Learning Research, vol. 23, no. 1, article no. 136, 2022. doi: 10.5555/3586589.3586725
    [17]
    B. C. Pan, C. D. Li, and H. J. Che, “Nonconvex low-rank tensor approximation with graph and consistent regularizations for multi-view subspace learning,” Neural Networks, vol. 161, pp. 638–658, 2023. doi: 10.1016/j.neunet.2023.02.016
    [18]
    Y. Wang, W. J. Zhang, L. Wu, et al., “Iterative views agreement: An iterative low-rank based structured optimization method to multi-view spectral clustering,” in Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, New York, NY, USA, pp. 2153–2159, 2016.
    [19]
    Y. Wang, L. Wu, X. M. Lin, et al., “Multiview spectral clustering via structured low-rank matrix factorization,” IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 10, pp. 4833–4843, 2018. doi: 10.1109/tnnls.2017.2777489
    [20]
    R. Shefi and M. Teboulle, “Rate of convergence analysis of decomposition methods based on the proximal method of multipliers for convex minimization,” SIAM Journal on Optimization, vol. 24, no. 1, pp. 269–297, 2014. doi: 10.1137/130910774
    [21]
    N. Parikh, “Proximal algorithms,” Foundations and Trends in Optimization, vol. 1, no. 3, pp. 127–239, 2014. doi: 10.1561/2400000003
    [22]
    Z. C. Lin, R. S. Liu, and Z. X. Su, “Linearized alternating direction method with adaptive penalty for low-rank representation,” in Proceedings of the 24th International Conference on Neural Information Processing Systems, Granada, Spain, pp. 612–620, 2011.
    [23]
    Y. Y. Ouyang, Y. M. Chen, G. H. Lan, et al., “An accelerated linearized alternating direction method of multipliers,” SIAM Journal on Imaging Sciences, vol. 8, no. 1, pp. 644–681, 2015. doi: 10.1137/14095697X
    [24]
    C. Y. Lu, J. S. Feng, S. C. Yan, et al., “A unified alternating direction method of multipliers by majorization minimization,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 3, pp. 527–541, 2018. doi: 10.1109/tpami.2017.2689021
    [25]
    E. K. Ryu and W. Yin, “ADMM-type methods,” in Large-Scale Convex Optimization: Algorithms & Analyses via Monotone Operators, E. K. Ryu and W. Yin, Eds. Cambridge University Press, Cambridge, UK, pp. 160–189, 2022.
    [26]
    D. Q. Wang, “Extracting meaningful EEG features using constrained tensor decomposition,” Ph.D. Thesis, University of Jyväskylä, Jyväskylä, Finland, 2019.
    [27]
    D. P. Bertsekas, Nonlinear Programming, 3rd ed., Athena Scientific, Belmont, MA, USA, 2016.
    [28]
    A. Cichocki and A. H. Phan, “Fast local algorithms for large scale nonnegative matrix and tensor factorizations,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E92-A, no. 3, pp. 708–721, 2009. doi: 10.1587/transfun.e92.a.708
    [29]
    B. W. Bader and T. G. Kolda, “Tensor toolbox for MATLAB, version 3.2.1,” Available at: https://www.tensortoolbox.org/, 2021.
    [30]
    L. Badea, M. Onu, T. Wu, et al., “Exploring the reproducibility of functional connectivity alterations in Parkinson’s disease,” PLoS One, vol. 12, no. 11, article no. e0188196, 2017. doi: 10.1371/journal.pone.0188196
    [31]
    B. T. T. Yeo, F. M. Krienen, J. Sepulcre, et al., “The organization of the human cerebral cortex estimated by intrinsic functional connectivity,” Journal of Neurophysiology, vol. 106, no. 3, pp. 1125–1165, 2011. doi: 10.1152/jn.00338.2011
    [32]
    E. Y. Choi, B. T. T. Yeo, and R. L. Buckner, “The organization of the human striatum estimated by intrinsic functional connectivity,” Journal of Neurophysiology, vol. 108, no. 8, pp. 2242–2263, 2012. doi: 10.1152/jn.00270.2012
    [33]
    G. Q. Hu, D. Q. Wang, S. W. Luo, et al., “Frequency specific co-activation pattern analysis via sparse nonnegative tensor decomposition,” Journal of Neuroscience Methods, vol. 362, article no. 109299, 2021. doi: 10.1016/j.jneumeth.2021.109299
    [34]
    X. W. Song, Z. Y. Dong, X. Y. Long, et al., “REST: A toolkit for resting-state functional magnetic resonance imaging data processing,” PLoS One, vol. 6, no. 9, article no. e25031, 2011. doi: 10.1371/journal.pone.0025031
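
Illustrative sketch of the inexact inner-iteration update. The abstract describes an inexact scheme in which each nonnegative subproblem of the block coordinate descent is solved by only a finite number of inner iterations of an ADMM-type update with a proximal term. The Python/NumPy sketch below is a minimal illustration of that general idea, not the authors' A2DPMM implementation: the function names, the proximal weight mu, the penalty heuristic rho = trace(G)/R, and the inner iteration count are illustrative assumptions.

```python
import numpy as np

def khatri_rao(mats):
    """Column-wise Khatri-Rao product of matrices that all have R columns."""
    R = mats[0].shape[1]
    out = mats[0]
    for M in mats[1:]:
        out = (out[:, None, :] * M[None, :, :]).reshape(-1, R)
    return out

def nls_admm_prox(Xn, KR, H, Z, U, mu=1e-3, inner_iters=5):
    """Approximately solve one mode-n subproblem of NCP,
        min_{H >= 0} 0.5*||Xn - H @ KR.T||_F^2 + 0.5*mu*||H - H_prev||_F^2,
    by a finite number of ADMM inner iterations. The variable is split into an
    unconstrained copy H and a nonnegative copy Z with scaled dual U.
    mu, inner_iters, and the rho heuristic are illustrative choices."""
    G = KR.T @ KR                         # R x R Gram matrix of the Khatri-Rao product
    F = Xn @ KR                           # I_n x R matricized-tensor-times-Khatri-Rao term
    rho = np.trace(G) / G.shape[0]        # a common heuristic for the ADMM penalty
    H_prev = H.copy()                     # anchor for the proximal term
    L = np.linalg.cholesky(G + (rho + mu) * np.eye(G.shape[0]))
    for _ in range(inner_iters):          # finite number of inner iterations
        rhs = (F + rho * (Z - U) + mu * H_prev).T
        H = np.linalg.solve(L.T, np.linalg.solve(L, rhs)).T   # regularized least-squares step
        Z = np.maximum(0.0, H + U)        # proximal step: projection onto the nonnegative orthant
        U = U + H - Z                     # scaled dual update
    return H, Z, U
```

In an outer alternating optimization loop, each factor of the decomposition would be updated in turn by calling such a routine with Xn set to the corresponding mode unfolding and KR set to the Khatri-Rao product of the other factors' nonnegative copies; limiting each block to a few inner iterations is what makes the scheme inexact but fast.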