Hengmin ZHANG, Jian YANG, Wenli DU, et al., “Enhanced Acceleration for Generalized Nonconvex Low-Rank Matrix Learning,” Chinese Journal of Electronics, vol. 34, no. 1, pp. 1–16, 2025. doi: 10.23919/cje.2023.00.340

Enhanced Acceleration for Generalized Nonconvex Low-Rank Matrix Learning

doi: 10.23919/cje.2023.00.340
More Information
  • Author Bios:

    Hengmin ZHANG received his Ph.D. degree from the School of Computer Science and Engineering, Nanjing University of Science and Technology (NJUST), Nanjing, China, in 2019. He was a Post-Doctoral Fellow with the School of Information Science and Engineering, East China University of Science and Technology (ECUST), Shanghai, China, and also a Post-Doctoral Fellow with the PAMI Research Group, Department of Computer and Information Science, University of Macau (UM), Macau, China, from 2019 to 2022. He is currently a Research Fellow with the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore. He has published more than 30 technical papers in prominent journals and conferences. His research interests include sparse coding and low-rank matrix recovery, nonconvex optimization, and large-scale representation learning methods. (Email: hengmin.zhang@ntu.edu.sg)

    Jian YANG received the Ph.D. degree in Pattern Recognition and Intelligence Systems from Nanjing University of Science and Technology (NJUST), Nanjing, China, in 2002. In 2003, he was a Post-Doctoral Researcher with the University of Zaragoza, Zaragoza, Spain. From 2004 to 2006, he was a Post-Doctoral Fellow with the Biometrics Centre, The Hong Kong Polytechnic University, Hong Kong, China. From 2006 to 2007, he was a Post-Doctoral Fellow with the Department of Computer Science, New Jersey Institute of Technology, Newark, NJ, USA. He is currently a Chang-Jiang Professor with the School of Computer Science and Engineering, NJUST. He has authored more than 400 scientific papers in pattern recognition, computer vision, and machine learning, and his papers have been cited more than 41000 times on Google Scholar. His research interests include pattern recognition, computer vision, and machine learning. Prof. Yang is a Fellow of the International Association for Pattern Recognition (IAPR). He is/was an Associate Editor of Pattern Recognition, Pattern Recognition Letters, IEEE Transactions on Neural Networks and Learning Systems, and Neurocomputing. (Email: csjyang@njust.edu.cn)

    Wenli DU received the B.S. and M.S. degrees in Chemical Process Control from the Dalian University of Technology, Dalian, China, in 1997 and 2000, respectively, and the Ph.D. degree in Control Theory and Control Engineering from East China University of Science and Technology, Shanghai, China, in 2005. She is currently a Professor of the College of Information Science and Engineering and serves as the Dean of Graduate School, East China University of Science and Technology, Shanghai, China, and is also the Vice-director of the Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai, China. Her research interests include control theory and applications, system modeling, advanced control, and process optimization. (Email: wldu@ecust.edu.cn)

    Bob ZHANG received the Ph.D. degree in Electrical and Computer Engineering from University of Waterloo, Waterloo, ON, Canada, in 2011. He was with the Center for Pattern Recognition and Machine Intelligence and later was a Post-Doctoral Researcher with the Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA. He is currently an Associate Professor with the Department of Computer and Information Science, University of Macau, Macau, China. In addition, he is/was also a Technical Committee Member of the IEEE Systems, Man, and Cybernetics Society, and an Associate Editor of IEEE Transactions on Systems, Man, and Cybernetics: Systems, IEEE Transactions on Neural Networks and Learning Systems, Artificial Intelligence Review, and IET Computer Vision. His research interests include biometrics, pattern recognition, feature extraction/detection, and image processing. (Email: bobzhang@um.edu.mo)

    Zhiyuan ZHA received the Ph.D. degree from the School of Electronic Science and Engineering, Nanjing University, Nanjing, China, in 2018. He is currently a Senior Post-Doctoral Research Fellow with Nanyang Technological University, Singapore, Singapore. Dr. Zha was a recipient of the Platinum Best Paper Award and the Best Paper Runner-Up Award at the IEEE International Conference on Multimedia and Expo (ICME) in 2017 and 2020, respectively. He has been an Associate Editor of The Visual Computer since 2023. His research interests include inverse problems in image/video processing, sparse signal representation, and machine learning. (Email: zhiyuan.zha@ntu.edu.sg)

    Bihan WEN received the B.S. degree in Electrical and Electronic Engineering from Nanyang Technological University, Singapore, Singapore, in 2012, and the M.S. and Ph.D. degrees in Electrical and Computer Engineering from University of Illinois at Urbana-Champaign, Champaign, IL, USA, in 2015 and 2018, respectively. He is currently a Nanyang Assistant Professor with the School of Electrical and Electronic Engineering, Nanyang Technological University. He was a recipient of the 2016 Yee Fellowship and the 2012 Professional Engineers Board Gold Medal, Singapore. He was also a recipient of the Best Paper Runner Up Award at the IEEE International Conference on Multimedia and Expo in 2020. He has been an Associate Editor of IEEE Transactions on Circuits and Systems for Video Technology since 2022, and an Associate Editor of MDPI Micromachines since 2021. He is a Guest Editor for IEEE Signal Processing Magazine from 2021 to 2023, and a Guest Editor for IEEE Journal of Selected Topics in Signal Processing from 2023 to 2025. His research interests include machine learning, computational imaging, computer vision, image and video processing, and big data applications. (Email: bihan.wen@ntu.edu.sg)

  • Corresponding author: Email: bihan.wen@ntu.edu.sg
  • Received Date: 2023-10-26
  • Revised Date: 2023-11-27
  • Accepted Date: 2024-01-10
  • Available Online: 2024-03-02
  • Abstract: Matrix minimization techniques that employ the nuclear norm have gained recognition for their applicability in tasks like image inpainting, clustering, classification, and reconstruction. However, when used to relax the rank function, they carry inherent bias and computational burdens, making them less effective and efficient in real-world scenarios. To address these challenges, our research focuses on generalized nonconvex rank regularization problems in robust matrix completion (RMC), low-rank representation (LRR), and robust matrix regression (RMR). We introduce approaches for effective and efficient low-rank matrix learning, grounded in generalized nonconvex rank relaxations inspired by various substitutes for the $\ell_0$-norm relaxed functions; these relaxations capture low-rank structures more accurately. Our optimization strategy employs a nonconvex, multi-variable alternating direction method of multipliers (ADMM), backed by rigorous theoretical analysis of complexity and convergence. The algorithm iteratively updates blocks of variables, ensuring efficient convergence. Additionally, we incorporate the randomized singular value decomposition technique and/or other acceleration strategies to enhance the computational efficiency of our approach, particularly for large-scale constrained minimization problems. Experimental results across a variety of vision-related application tasks demonstrate the superiority of the proposed methodologies in terms of both efficacy and efficiency compared with most other related learning methods.
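    Two computational ingredients named in the abstract — a nonconvex substitute for the rank function applied to singular values, and randomized SVD for acceleration — can be illustrated together. The sketch below is a minimal NumPy illustration under stated assumptions, not the paper's algorithm: the MCP penalty stands in for one of the many $\ell_0$-norm substitutes the abstract alludes to, and `randomized_svd`/`nonconvex_svt` are hypothetical helper names chosen here.

    ```python
    import numpy as np

    def randomized_svd(A, k, oversample=10, seed=0):
        """Truncated SVD via random range sketching (Halko-style)."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        # Sketch the range of A with a Gaussian test matrix.
        Omega = rng.standard_normal((n, k + oversample))
        Q, _ = np.linalg.qr(A @ Omega)
        # Project A onto the sketched subspace and do a small exact SVD.
        B = Q.T @ A
        Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
        U = Q @ Ub
        return U[:, :k], s[:k], Vt[:k, :]

    def nonconvex_svt(A, lam, gamma=2.0, k=20):
        """Singular value thresholding under an MCP-style nonconvex penalty.
        Large singular values are left untouched (unlike nuclear-norm
        shrinkage), which is the bias reduction nonconvex relaxations offer."""
        U, s, Vt = randomized_svd(A, k)
        # Closed-form MCP proximal map per singular value (requires gamma > 1).
        shrunk = np.where(
            s <= gamma * lam,
            np.maximum(s - lam, 0.0) * gamma / (gamma - 1.0),
            s,
        )
        return U @ np.diag(shrunk) @ Vt
    ```

    In an ADMM iteration for RMC/LRR/RMR, a step of this form would update the low-rank block while the other variable blocks are held fixed; replacing the exact SVD with the randomized sketch is what makes the per-iteration cost tractable at large scale.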
    Figures(11)  / Tables(6)