Citation: LI Yanping, ZHUO Li, SUN Liangliang, ZHANG Hui, LI Xiaoguang, YANG Yang, WEI Wei. Tongue Color Classification in TCM with Noisy Labels via Confident-Learning-Assisted Knowledge Distillation[J]. Chinese Journal of Electronics, 2023, 32(1): 140-150. doi: 10.23919/cje.2022.00.040
[1] G. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neural network,” arXiv preprint, arXiv: 1503.02531, 2015.

[2] C. H. Wu, T. C. Chen, Y. C. Hsieh, et al., “A hybrid rule mining approach for cardiovascular disease detection in traditional Chinese medicine,” Journal of Intelligent & Fuzzy Systems, vol.36, no.2, pp.861–870, 2019. doi: 10.3233/JIFS-169864

[3] Y. Wang, Y. Liu, L. Yu, et al., “Research methods about data mining technology in the study medication rule on famous veteran doctors of TCM,” in Proc. of 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Madrid, Spain, pp.1948–1952, 2018.
|
[4] J. Hou, H. Y. Su, B. Yan, et al., “Classification of tongue color based on CNN,” in Proc. of 2017 IEEE 2nd International Conference on Big Data Analysis (ICBDA), Beijing, China, pp.725–729, 2017.

[5] L. Chen, B. Wang, Y. Ma, et al., “The retrieval of the medical tongue images based on color analysis,” in Proc. of 2016 11th International Conference on Computer Science & Education (ICCSE), Nagoya, Japan, pp.113–117, 2016.

[6] Y. Lu, X. Li, L. Zhuo, et al., “DCCN: A deep-color correction network for traditional Chinese medicine tongue images,” in Proc. of 2018 IEEE International Conference on Multimedia & Expo Workshops (ICME Workshops), San Diego, USA, pp.1–6, 2018.
|
[7] P. L. Qu, H. Zhang, L. Zhuo, et al., “Automatic analysis of tongue substance color and coating color using sparse representation-based classifier,” in Proc. of 2016 International Conference on Progress in Informatics and Computing (PIC), Shanghai, China, pp.289–294, 2016.

[8] P. A. Gutiérrez, M. Perez-Ortiz, J. Sanchez-Monedero, et al., “Ordinal regression methods: Survey and experimental study,” IEEE Transactions on Knowledge and Data Engineering, vol.28, no.1, pp.127–146, 2015. doi: 10.1109/TKDE.2015.2457911

[9] J. Sánchez-Monedero, P. A. Gutiérrez, and M. Pérez-Ortiz, “ORCA: A Matlab/Octave toolbox for ordinal regression,” Journal of Machine Learning Research, vol.20, no.125, pp.1–5, 2019.

[10] G. Algan and I. Ulusoy, “Image classification with deep learning in the presence of noisy labels: A survey,” Knowledge-Based Systems, vol.215, article no.106771, 2021.
|
[11] C. Northcutt, L. Jiang, and I. Chuang, “Confident learning: Estimating uncertainty in dataset labels,” Journal of Artificial Intelligence Research, vol.70, pp.1373–1411, 2021. doi: 10.1613/jair.1.12125

[12] T. Xiao, T. Xia, Y. Yang, et al., “Learning from massive noisy labeled data for image classification,” in Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, pp.2691–2699, 2015.

[13] A. Veit, N. Alldrin, G. Chechik, et al., “Learning from noisy large-scale datasets with minimal supervision,” in Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, USA, pp.6575–6583, 2017.

[14] S. J. Delany, N. Segata, and B. M. Namee, “Profiling instances in noise reduction,” Knowledge-Based Systems, vol.31, pp.28–40, 2012. doi: 10.1016/j.knosys.2012.01.015

[15] J. Luengo, S. O. Shim, S. Alshomrani, et al., “CNC-NOS: Class noise cleaning by ensemble filtering and noise scoring,” Knowledge-Based Systems, vol.140, pp.27–49, 2018. doi: 10.1016/j.knosys.2017.10.026
|
[16] Y. Li, J. Yang, Y. Song, et al., “Learning from noisy labels with distillation,” in Proc. of IEEE International Conference on Computer Vision (ICCV), Venice, Italy, pp.1928–1936, 2017.

[17] B. Sun, S. Chen, J. Wang, et al., “A robust multi-class AdaBoost algorithm for mislabeled noisy data,” Knowledge-Based Systems, vol.102, pp.87–102, 2016. doi: 10.1016/j.knosys.2016.03.024

[18] H. Song, M. Kim, D. Park, et al., “Learning from noisy labels with deep neural networks: A survey,” arXiv preprint, arXiv: 2007.08199, 2020.

[19] N. Manwani and P. S. Sastry, “Noise tolerance under risk minimization,” IEEE Transactions on Cybernetics, vol.43, no.3, pp.1146–1151, 2013. doi: 10.1109/TSMCB.2012.2223460

[20] A. Ghosh, H. Kumar, and P. S. Sastry, “Robust loss functions under label noise for deep neural networks,” in Proc. of the AAAI Conference on Artificial Intelligence (AAAI), AAAI Press, Palo Alto, CA, USA, vol.31, no.1, 2017. doi: 10.1609/aaai.v31i1.10894
|
[21] X. Wang, E. Kodirov, Y. Hua, et al., “Improved mean absolute error for learning meaningful patterns from abnormal training data,” arXiv preprint, arXiv: 1903.12141v5, 2019.

[22] L. P. F. Garcia, A. C. P. L. F. de Carvalho, and A. C. Lorena, “Noise detection in the meta-learning level,” Neurocomputing, vol.176, pp.14–25, 2016. doi: 10.1016/j.neucom.2014.12.100

[23] D. Angluin and P. Laird, “Learning from noisy examples,” Machine Learning, vol.2, no.4, pp.343–370, 1988.

[24] L. P. F. Garcia, J. Lehmann, A. C. P. L. F. de Carvalho, et al., “New label noise injection methods for the evaluation of noise filters,” Knowledge-Based Systems, vol.163, pp.693–704, 2019. doi: 10.1016/j.knosys.2018.09.031

[25] G. Algan and I. Ulusoy, “Label noise types and their effects on deep learning,” arXiv preprint, arXiv: 2003.10471, 2020.
|
[26] J. H. Cho and B. Hariharan, “On the efficacy of knowledge distillation,” in Proc. of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), pp.4794–4802, 2019.

[27] C. Yang, L. Xie, S. Qiao, et al., “Training deep neural networks in generations: A more tolerant teacher educates better students,” in Proc. of the AAAI Conference on Artificial Intelligence (AAAI), AAAI Press, Palo Alto, CA, USA, vol.33, no.1, 2019. doi: 10.1609/aaai.v33i01.33015628

[28] L. Chen, H. Zhang, J. Xiao, et al., “SCA-CNN: Spatial and channel-wise attention in convolutional networks for image captioning,” in Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, pp.6298–6306, 2017.

[29] S. Woo, J. Park, J. Y. Lee, et al., “CBAM: Convolutional block attention module,” in Proc. of the European Conference on Computer Vision (ECCV), Munich, Germany, pp.3–19, 2018.

[30] N. Ma, X. Zhang, M. Liu, et al., “Activate or not: Learning customized activation,” in Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, pp.8032–8042, 2021.
|
[31] Z. Shen, Z. He, and X. Xue, “MEAL: Multi-model ensemble via adversarial learning,” in Proc. of the AAAI Conference on Artificial Intelligence (AAAI), AAAI Press, Palo Alto, CA, USA, vol.33, no.1, 2019. doi: 10.1609/aaai.v33i01.33014886

[32] L. S. Shen, Y. H. Cai, and X. F. Zhang, Collection and Analysis of Tongue Picture in Traditional Chinese Medicine, Beijing University of Technology Press, Beijing, China, pp.26–30, 2007. (in Chinese)

[33] C. Szegedy, V. Vanhoucke, S. Ioffe, et al., “Rethinking the inception architecture for computer vision,” in Proc. of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, pp.2818–2826, 2016.

[34] S. Reed, H. Lee, D. Anguelov, et al., “Training deep neural networks on noisy labels with bootstrapping,” arXiv preprint, arXiv: 1412.6596, 2014.

[35] Z. Zhang and M. R. Sabuncu, “Generalized cross entropy loss for training deep neural networks with noisy labels,” in Proc. of 32nd Conference on Neural Information Processing Systems (NeurIPS), Montréal, Canada, pp.8792–8802, 2018.
|
[36] K. Yi and J. Wu, “Probabilistic end-to-end noise correction for learning with noisy labels,” in Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, pp.7017–7025, 2019.

[37] B. Han, Q. Yao, X. Yu, et al., “Co-teaching: Robust training of deep neural networks with extremely noisy labels,” arXiv preprint, arXiv: 1804.06872, 2018.

[38] X. Yu, B. Han, J. Yao, et al., “How does disagreement help generalization against label corruption?” in Proc. of International Conference on Machine Learning (ICML), Long Beach, CA, USA, pp.7164–7173, 2019.
|