Citation: SUN Xiaoye, MA Liyan, LI Gongyan, "Multi-vision Attention Networks for On-line Red Jujube Grading," Chinese Journal of Electronics, vol. 28, no. 6, pp. 1108-1117, 2019, doi: 10.1049/cje.2019.07.014

Multi-vision Attention Networks for On-line Red Jujube Grading

doi: 10.1049/cje.2019.07.014
Funds: This work is supported by the National Key R&D Program of China (No. 2018YFD0700300).
More Information
  • Corresponding author: MA Liyan was born in 1983. She received the Ph.D. degree in computer vision from Beijing Jiaotong University, Beijing, China, in 2013. From 2013 to 2018, she was successively an assistant professor and an associate professor with the Institute of Microelectronics of Chinese Academy of Sciences. She is currently an associate professor with the School of Computer Engineering and Sciences, Shanghai University, Shanghai, China. Her current research interests include computer vision, image processing, and deep learning. (Email: liyanma@shu.edu.cn)
  • Received Date: 2018-10-10
  • Revised Date: 2019-05-16
  • Publish Date: 2019-11-10
  • Abstract: To solve the red jujube classification problem, this paper designs a convolutional neural network model with low computational cost and high classification accuracy. The architecture of the model is inspired by the multi-vision mechanism of organisms and by DenseNet. To further improve the model, we add the attention mechanism of SE-Net. We also construct a dataset containing 23,735 red jujube images captured by a jujube grading system. According to the appearance of the jujubes and the characteristics of the grading system, the dataset is divided into four classes: invalid, rotten, wizened and normal. Numerical experiments show that the classification accuracy of our model reaches 91.89%, comparable to DenseNet-121, InceptionV3, InceptionV4 and Inception-ResNet v2, while achieving real-time performance.
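
The page does not reproduce any implementation, so the following is a minimal PyTorch sketch of the general idea the abstract describes: DenseNet-style feature reuse combined with SE-Net channel attention, ending in a four-class head. All names (SEBlock, DenseSELayer, TinyJujubeNet) and layer sizes are illustrative assumptions rather than the authors' published architecture, and the paper's multi-vision branches (parallel paths with different receptive fields) are omitted for brevity.

```python
# Minimal sketch (not the authors' released code): a DenseNet-style layer
# with SE-Net channel attention and a 4-class head
# (invalid / rotten / wizened / normal). All sizes are illustrative.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels using global context (Hu et al.)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))    # squeeze: global average pooling
        return x * w.view(b, c, 1, 1)      # excite: per-channel rescaling

class DenseSELayer(nn.Module):
    """DenseNet-style layer: concatenate the input with newly produced
    features; SE attention is applied to the new features only."""
    def __init__(self, in_channels, growth_rate=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1, bias=False),
        )
        self.se = SEBlock(growth_rate, reduction=8)

    def forward(self, x):
        new = self.se(self.conv(x))
        return torch.cat([x, new], dim=1)  # dense connectivity

class TinyJujubeNet(nn.Module):
    def __init__(self, num_classes=4, growth_rate=32, num_layers=4):
        super().__init__()
        channels = 64
        self.stem = nn.Conv2d(3, channels, kernel_size=7, stride=2, padding=3, bias=False)
        layers = []
        for _ in range(num_layers):
            layers.append(DenseSELayer(channels, growth_rate))
            channels += growth_rate
        self.features = nn.Sequential(*layers)
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x):
        f = self.features(self.stem(x))
        return self.head(f.mean(dim=(2, 3)))  # global average pool + classifier

if __name__ == "__main__":
    logits = TinyJujubeNet()(torch.randn(1, 3, 224, 224))
    print(logits.shape)  # torch.Size([1, 4])
```

Running the script prints torch.Size([1, 4]), one logit per grade. Applying the SE reweighting only to the newly generated features, rather than to the whole concatenated tensor, keeps the added computation small, in the spirit of the low-computational-cost goal stated in the abstract.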
References

J. Chen et al., "A review of dietary Ziziphus jujuba fruit (jujube): Developing health food supplements for brain protection," Evidence-Based Complementary and Alternative Medicine, 2017.
J. Blasco, N. Aleixos and E. Molto, "Machine vision system for automatic quality grading of fruit," Biosystems Engineering, vol. 85, no. 4, pp. 415-423, 2003.
A. Kamilaris and F. Prenafeta-Boldu, "Deep learning in agriculture: A survey," Computers and Electronics in Agriculture, vol. 147, pp. 70-90, 2018.
Y. LeCun, Y. Bengio and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436-444, 2015.
A. Krizhevsky, I. Sutskever and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Communications of the ACM, vol. 60, no. 6, pp. 84-90, 2017.
S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," arXiv preprint arXiv:1502.03167, 2015.
K. He, X. Zhang, S. Ren and J. Sun, "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification," in 2015 IEEE International Conference on Computer Vision (ICCV), pp. 1026-1034, 2015.
M. Lin, Q. Chen and S. Yan, "Network in network," arXiv preprint arXiv:1312.4400, 2013.
K. He, X. Zhang, S. Ren and J. Sun, "Deep residual learning for image recognition," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, pp. 770-778, 2016.
G. Huang, Z. Liu, L. van der Maaten and K. Q. Weinberger, "Densely connected convolutional networks," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, pp. 2261-2269, 2017.
C. Szegedy et al., "Going deeper with convolutions," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, pp. 1-9, 2015.
C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens and Z. Wojna, "Rethinking the Inception architecture for computer vision," arXiv preprint arXiv:1512.00567, 2015.
C. Szegedy, S. Ioffe, V. Vanhoucke and A. Alemi, "Inception-v4, Inception-ResNet and the impact of residual connections on learning," arXiv preprint arXiv:1602.07261, 2016.
J. Hu, L. Shen and G. Sun, "Squeeze-and-excitation networks," arXiv preprint arXiv:1709.01507, 2017.
S. W. Sidehabi, A. Suyuti, I. S. Areni and I. Nurtanio, "Classification on passion fruit's ripeness using K-means clustering and artificial neural network," in 2018 International Conference on Information and Communications Technology (ICOIACT), pp. 304-309, 2018.
G. Zeng, "Fruit and vegetables classification system using image saliency and convolutional neural network," in 2017 IEEE 3rd Information Technology and Mechatronics Engineering Conference (ITOEC), pp. 613-617, 2017.
Z. M. Khaing, Y. Naung and P. H. Htut, "Development of control system for fruit classification based on convolutional neural network," in 2018 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus), pp. 1805-1807, 2018.
Z. Schmilovitch, A. Hoffman, H. Egozi, R. Ben Zvi, Z. Bernstein and V. Alchanatis, "Maturity determination of fresh dates by near infrared spectrometry," Journal of the Science of Food and Agriculture, vol. 79, no. 1, pp. 86-90, 1999.
T. Najeeb and M. Safar, "Dates maturity status and classification using image processing," in 2018 International Conference on Computing Sciences and Engineering (ICCSE), pp. 1-6, 2018.
G. Muhammad, "Automatic date fruit classification by using local texture descriptors and shape-size features," in 2014 European Modelling Symposium, pp. 174-179, 2014.
A. I. Hobani, A. M. Thottam and K. A. Ahmed, "Development of a neural network classifier for date fruit varieties using some physical attributes," King Saud University-Agricultural Research Center, 2003.
M. Fadel, "Date fruits classification using probabilistic neural networks," Agricultural Engineering International: CIGR Journal, 2007.
K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
Y. LeCun, L. Bottou, Y. Bengio and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
F. Wang et al., "Residual attention network for image classification," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6450-6458, 2017.
X. Glorot, A. Bordes and Y. Bengio, "Deep sparse rectifier neural networks," in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 315-323, 2011.
Y. Yang, Z. Zhong, T. Shen and Z. Lin, "Convolutional neural networks with alternately updated clique," arXiv preprint arXiv:1802.10419, 2018.
V. Sze, Y.-H. Chen, T.-J. Yang and J. S. Emer, "Efficient processing of deep neural networks: A tutorial and survey," Proceedings of the IEEE, vol. 105, no. 12, pp. 2295-2329, 2017.
K. He, X. Zhang, S. Ren and J. Sun, "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification," in Proceedings of the IEEE International Conference on Computer Vision, pp. 1026-1034, 2015.
S. J. Reddi, S. Kale and S. Kumar, "On the convergence of Adam and beyond," in International Conference on Learning Representations, 2018.
I. Sutskever, J. Martens, G. Dahl and G. Hinton, "On the importance of initialization and momentum in deep learning," in International Conference on Machine Learning, pp. 1139-1147, 2013.
B. Zoph, V. Vasudevan, J. Shlens and Q. V. Le, "Learning transferable architectures for scalable image recognition," arXiv preprint arXiv:1707.07012, 2017.
Z. He, M. Yang and H. Liu, "Multi-task joint feature selection for multi-label classification," Chinese Journal of Electronics, vol. 24, no. 2, pp. 281-287, 2015.
