LAI Yuping, PING Yuan, HE Wenda, et al., “Variational Bayesian Inference for Finite Inverted Dirichlet Mixture Model and Its Application to Object Detection,” Chinese Journal of Electronics, vol. 27, no. 3, pp. 603-610, 2018, doi: 10.1049/cje.2018.03.003

Variational Bayesian Inference for Finite Inverted Dirichlet Mixture Model and Its Application to Object Detection

doi: 10.1049/cje.2018.03.003
Funds:  This work is supported by the National Natural Science Foundation of China (No.51335004, No.61363085, No.61303232), the Project of Action Plan Powerful School with Talents in North China University of Technology (No.XN018022), the Project of Science and Technology Innovation Service Capacity Building Project (No.PXM2017-014212-000002), the Program for Science & Technology Innovation Talents in Universities of Henan Province (No.18HASTIT022), and the Foundation of Henan Educational Committee (No.16A520025, No.18A520047).
More Information
  • Corresponding author: ZHANG Xiufeng received the Ph.D. degree in mechatronics engineering from Harbin Institute of Technology, Harbin, China, in 2004. He has been a senior scientist at the National Research Center for Rehabilitation Technical Aids, Beijing, China, since 2014. His research interests include robotics, pattern recognition, and computer vision. (Email: zhangxiufeng@hit.edu.cn)
  • Received Date: 2016-11-07
  • Rev Recd Date: 2017-04-26
  • Publish Date: 2018-05-10
  • Abstract: As a variant of the finite mixture model (FMM), the finite inverted Dirichlet mixture model (IDMM) cannot avoid the conventional challenges, such as selecting the appropriate number of mixture components from the observed data. To ease these issues, we propose a variational inference framework for learning the IDMM, which has proved to be an efficient tool for modeling vectors with positive elements. Compared with the conventional Expectation-Maximization (EM) algorithm commonly used for learning FMMs, the proposed approach effectively prevents over-fitting. Furthermore, it determines the number of mixture components and estimates the parameters simultaneously and automatically. Experimental results on both synthetic data and real object-detection data confirm that significant improvements in flexibility and efficiency are achieved.
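
A minimal illustrative sketch (not the authors' implementation) of the building blocks the abstract refers to is given below in Python: the inverted Dirichlet log-density, the mixture responsibilities, and a pruning step that discards components whose mixing weights collapse toward zero, which is how variational learning of a mixture typically reveals the effective number of components. The function names, array shapes, and pruning threshold are assumptions made for illustration only.

import numpy as np
from scipy.special import gammaln

def inverted_dirichlet_logpdf(x, alpha):
    """Log-density of the inverted Dirichlet distribution.

    x     : (D,) vector with positive elements
    alpha : (D+1,) positive shape parameters
    p(x | alpha) = Gamma(sum(alpha)) / prod_d Gamma(alpha_d)
                   * prod_{d<=D} x_d^(alpha_d - 1)
                   * (1 + sum_{d<=D} x_d)^(-sum(alpha))
    """
    x = np.asarray(x, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    a_sum = alpha.sum()
    log_norm = gammaln(a_sum) - gammaln(alpha).sum()
    return (log_norm
            + np.sum((alpha[:-1] - 1.0) * np.log(x))
            - a_sum * np.log1p(x.sum()))

def responsibilities(X, weights, alphas):
    """Posterior probability that each sample belongs to each mixture component."""
    log_r = np.array([[np.log(w) + inverted_dirichlet_logpdf(x, a)
                       for w, a in zip(weights, alphas)] for x in X])
    log_r -= log_r.max(axis=1, keepdims=True)   # subtract max for numerical stability
    r = np.exp(log_r)
    return r / r.sum(axis=1, keepdims=True)

def prune_components(weights, alphas, threshold=1e-3):
    """Drop components whose mixing weight has collapsed (model-order selection)."""
    keep = np.asarray(weights) > threshold
    w = np.asarray(weights)[keep]
    return w / w.sum(), [a for a, k in zip(alphas, keep) if k]

In a full variational treatment, the responsibility update would alternate with updates of the posterior hyper-parameters governing the mixing weights and the alpha parameters; learning starts from more components than needed and the pruning step removes the redundant ones.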
