Jiajie SHI, Zhi YANG, Jiafeng LIU, et al., “Sparse Homogeneous Learning: A New Approach for Sparse Learning,” Chinese Journal of Electronics, vol. 33, no. 4, pp. 1–10, 2024, doi: 10.23919/cje.2023.00.130

Sparse Homogeneous Learning: A New Approach for Sparse Learning

doi: 10.23919/cje.2023.00.130
More Information
  • Author Bio:

    Jiajie SHI is a student in the School of Instrumentation Science and Opto-electronics Engineering, Beijing Information Science and Technology University, Beijing, China. (Email: begreating@qq.com)

    Zhi YANG received his doctoral degree in electrical engineering from the University of Connecticut and completed his postdoctoral training in the Department of Radiology, University of Michigan. He is currently a professor in the School of Biomedical Engineering, Capital Medical University, Beijing, China. His research interests include new medical imaging technologies, image processing and analysis, and image-guided therapeutic technologies. He is a member of AAPM, IEEE, and SPIE. He has served as vice president of the China Medical Informatics Association (CMIA) and is a senior member of the China Medical Physics Society and the Beijing Optical Society. (Email: zhiYang@ccmu.edu.cn)

    Jiafeng LIU is an associate professor in the School of Biomedical Engineering, Capital Medical University, Beijing, China. He received his Ph.D. degree from Capital Medical University, China, in 2012. His current research interests include medical image computing. (Email: ccmuljf@ccmu.edu.cn)

    Hongli SHI received the M.S. and Ph.D. degrees from the School of Electronics and Information Engineering, Xi’an Jiaotong University, Xi’an, China, in 1999 and 2005, respectively. He was a postdoctoral scholar with the Department of Electronic Engineering, Fudan University, from 2006 to 2008. He is currently an associate professor with the College of Biomedical Engineering, Capital Medical University, Beijing, China. (Email: shl@ccmu.edu.cn)

  • Corresponding author: Email: shl@ccmu.edu.cn
  • Received Date: 2022-03-22
  • Accepted Date: 2022-03-22
  • Available Online: 2024-02-19
  • Abstract: Many sparse representation problems reduce to solving underdetermined systems of linear equations subject to a sparsity constraint on the solution. Many approaches have been proposed, such as sparse Bayesian learning. To improve solution sparsity and effectiveness in a more intuitive way, a new approach is proposed that starts from the general solution of the linear system. The general solution is decomposed into a particular solution and a homogeneous solution, where the homogeneous solution is designed to cancel as many elements of the particular solution as possible, making the general solution sparse. First, a special system of linear equations is constructed to link the homogeneous solution with the particular solution; this system is typically inconsistent. Second, the largest consistent subsystem is extracted from this system so that as many corresponding elements of the two solutions as possible cancel each other out. With an efficient implementation, the procedure can be accomplished in moderate computational time. Extensive experiments on sparse signal recovery and image reconstruction demonstrate the superiority of the proposed approach in sparseness and recovery accuracy at an acceptable computational cost.
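  • The decomposition described in the abstract can be sketched in a few lines of NumPy. This is an illustrative reconstruction of the general idea only, not the authors' exact algorithm: compute a particular solution with the pseudoinverse, obtain a null-space basis from the SVD, and greedily grow a consistent subsystem N[S] c = −x_p[S] so that the homogeneous term N c cancels the chosen entries of the particular solution. The function name, the greedy order, and the tolerances are all assumptions made for the sketch.

    ```python
    import numpy as np

    def sparse_via_homogeneous(A, y, rtol=1e-8):
        """Illustrative sketch (not the paper's exact procedure):
        write the general solution as x = x_p + N @ c, where x_p is a
        particular solution and N spans the null space of A, then choose
        c so that as many entries of x_p as possible are canceled."""
        x_p = np.linalg.pinv(A) @ y                # particular solution
        _, s, Vt = np.linalg.svd(A)                # null-space basis via SVD
        rank = int((s > rtol * s.max()).sum())
        N = Vt[rank:].T                            # shape (n, n - rank)
        # Greedily try to cancel entries of x_p, largest magnitude first,
        # keeping the index set S only while N[S] c = -x_p[S] stays consistent.
        order = np.argsort(-np.abs(x_p))
        S, c = [], np.zeros(N.shape[1])
        for i in order:
            trial = S + [i]
            sol, *_ = np.linalg.lstsq(N[trial, :], -x_p[trial], rcond=None)
            if np.allclose(N[trial, :] @ sol, -x_p[trial], atol=1e-8):
                S, c = trial, sol                  # subsystem is still consistent
        x = x_p + N @ c
        x[np.abs(x) < 1e-8] = 0.0                  # clean numerical noise
        return x
    ```

    Because the null space of an m × n system of rank r has dimension n − r, the consistent subsystem can generically cancel n − r entries, which is why the extracted subsystem directly controls the sparsity of the general solution.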
  • [1]
    C. M. Bishop, Pattern Recognition and Machine Learning. Springer, New York, 2006.
    [2]
    M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems,” IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 4, pp. 586–597, 2007. doi: 10.1109/JSTSP.2007.910281
    [3]
    M. Sadeghi and M. Babaie-Zadeh, “Iterative sparsification-projection: Fast and robust sparse signal approximation,” IEEE Transactions on Signal Processing, vol. 64, no. 21, pp. 5536–5548, 2016. doi: 10.1109/TSP.2016.2585123
    [4]
    J. Liu and B. D. Rao, “Robust PCA via 0-1 regularization,” IEEE Transactions on Signal Processing, vol. 67, no. 2, pp. 535–549, 2019. doi: 10.1109/TSP.2018.2883924
    [5]
    M. R. Yang and F. De Hoog, “Orthogonal matching pursuit with thresholding and its application in compressive sensing,” IEEE Transactions on Signal Processing, vol. 63, no. 20, pp. 5479–5486, 2015. doi: 10.1109/TSP.2015.2453137
    [6]
    D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006. doi: 10.1109/TIT.2006.871582
    [7]
    D. Wipf and S. Nagarajan, “Iterative reweighted 1 and 2 methods for finding sparse solutions,” IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 317–329, 2010. doi: 10.1109/JSTSP.2010.2042413
    [8]
    Y. Mohsenzadeh and H. Sheikhzadeh, “Gaussian kernel width optimization for sparse Bayesian learning,” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 4, pp. 709–719, 2015. doi: 10.1109/TNNLS.2014.2321134
    [9]
    X. Tan and J. Li, “Computationally efficient sparse Bayesian learning via belief propagation,” IEEE Transactions on Signal Processing, vol. 58, no. 4, pp. 2010–2021, 2010. doi: 10.1109/TSP.2010.2040683
    [10]
    D. Shutin, T. Buchgraber, S. R. Kulkarni, et al., “Fast Variational sparse Bayesian learning with automatic relevance determination for superimposed signals,” IEEE Transactions on Signal Processing, vol. 59, no. 12, pp. 6257–6261, 2011. doi: 10.1109/TSP.2011.2168217
    [11]
    J. Fang, L. Z. Zhang, and H. B. Li, “Two-dimensional pattern-coupled sparse Bayesian learning via generalized approximate message passing,” IEEE Transactions on Image Processing, vol. 25, no. 6, pp. 2920–2930, 2016. doi: 10.1109/TIP.2016.2556582
    [12]
    R. Giri and B. Rao, “Type I and Type II Bayesian methods for sparse signal recovery using scale mixtures,” IEEE Transactions on Signal Processing, vol. 64, no. 13, pp. 3418–3428, 2016. doi: 10.1109/TSP.2016.2546231
    [13]
    M. E. Tipping, “Sparse Bayesian learning and the relevance vector machine,” The Journal of Machine Learning Research, vol. 1 pp. 211–244, 2001. doi: 10.1162/15324430152748236
    [14]
    L. Luo, J. Yang, B. Zhang, et al., “Nonparametric Bayesian correlated group regression with applications to image classification,” IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 11, pp. 5330–5344, 2018. doi: 10.1109/TNNLS.2018.2797539
    [15]
    J. L. Li, X. T. Li, H. T. Zhang, et al., “Data-driven discovery of block-oriented nonlinear models using sparse null-subspace methods,” IEEE Transactions on Cybernetics, vol. 52, no. 5, pp. 3794–3804, 2022. doi: 10.1109/TCYB.2020.3015705
    [16]
    C. Y. Li, H. B. Xie, X. H. Fan, et al., “Kernelized sparse Bayesian matrix factorization,” IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 1, pp. 391–404, 2021. doi: 10.1109/TNNLS.2020.2978761
    [17]
    Y. Yuan, X. C. Tang, W. Zhou, et al., “Data driven discovery of cyber physical systems,” Nature Communications, vol. 10, no. 1, article no. 4894, 2019. doi: 10.1038/s41467-019-12490-1
    [18]
    Z. M. Li, Z. Zhang, J. Qin, et al., “Discriminative fisher embedding dictionary learning algorithm for object recognition,” IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 3, pp. 786–800, 2020. doi: 10.1109/TNNLS.2019.2910146
    [19]
    S. H. Gao, I. W. H. Tsang, and L. T. Chia, “Sparse representation with kernels,” IEEE Transactions on Image Processing, vol. 22, no. 2, pp. 423–434, 2013. doi: 10.1109/TIP.2012.2215620
    [20]
    M. S. Asif and J. Romberg, “Fast and accurate algorithms for re-weighted l1-norm minimization,” IEEE Transactions on Signal Processing, vol. 61, no. 23, pp. 5905–5916, 2013. doi: 10.1109/TSP.2013.2279362
    [21]
    W. Zhou, H. T. Zhang, and J. Wang, “An efficient sparse Bayesian learning algorithm based on Gaussian-scale mixtures,” IEEE Transactions on Neural Networks and Learning Systems, vol. 33, no. 7, pp. 3065–3078, 2022. doi: 10.1109/TNNLS.2020.3049056
    [22]
    C. R. Rao and S. K. Mitra, Generalized Inverse of Matrices and Its Applications. Wiley, New York, 1971.
    [23]
    A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications. Wiley, New York, 1974.
    [24]
    L. Weruaga and J. Via, “Sparse multivariate Gaussian mixture regression,” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 5, pp. 1098–1108, 2015. doi: 10.1109/TNNLS.2014.2334596
    [25]
    S. H. Ji, Y. Xue, and L. Carin, “Bayesian compressive sensing,” IEEE Transactions on Signal Processing, vol. 56, no. 6, pp. 2346–2356, 2008. doi: 10.1109/TSP.2007.914345
    [26]
    S. D. Babacan, R. Molina, and A. K. Katsaggelos, “Bayesian compressive sensing using Laplace priors,” IEEE Transactions on Image Processing, vol. 19, no. 1, pp. 53–63, 2010. doi: 10.1109/TIP.2009.2032894
    [27]
    M. Al-Shoukairi, P. Schniter, and B. D. Rao, “A GAMP-based low complexity sparse Bayesian learning algorithm,” IEEE Transactions on Signal Processing, vol. 66, no. 2, pp. 294–308, 2018. doi: 10.1109/TSP.2017.2764855