Citation: WANG Shuliang, CHI Hehua, YUAN Ziqiang, GENG Jing. Emotion Recognition Using Cloud Model[J]. Chinese Journal of Electronics, 2019, 28(3): 470-474. DOI: 10.1049/cje.2018.09.020
L.C. De Silva, T. Miyasato and R. Nakatsu, "Facial emotion recognition using multi-modal information", In Proceedings of IEEE International Conference on Information, Communications and Signal Processing (ICICS), Beijing, China, Vol.1, pp.397-401, 1997.
K. Durand, M. Gallay, A. Seigneuric, et al., "The development of facial emotion recognition: The role of configural information", Journal of Experimental Child Psychology, Vol.97, No.1, pp.14-27, 2007.
C.D. Kashyap and P.R. Vishnu, "Facial emotion recognition", International Journal of Engineering and Future Technology, Vol.7, No.7, pp.18-29, 2016.
O. Russakovsky, J. Deng, H. Su, et al., "ImageNet large scale visual recognition challenge", International Journal of Computer Vision, Vol.115, No.3, pp.211-252, 2015.
D.J. Beymer, "Face recognition under varying pose", In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Seattle, Washington, USA, pp.756-761, 1994.
R. Gross, S. Baker, I. Matthews, et al., Face Recognition across Pose and Illumination, Springer-Verlag, Berlin, Germany, 2004.
R. Jenkins and A.M. Burton, "Stable face representations", Philosophical Transactions of the Royal Society B: Biological Sciences, Vol.366, No.1571, pp.1671-1683, 2011.
W. Zhao, R. Chellappa, A. Rosenfeld, et al., "Face recognition: A literature survey", ACM Computing Surveys, Vol.35, No.4, pp.399-458, 2003.
Q. Zhou, U.R. Shafiq, Y. Zhou, et al., "Face recognition using dense SIFT feature alignment", Chinese Journal of Electronics, Vol.25, No.6, pp.1034-1039, 2016.
L. Wang, Y. Liang, W. Cai, et al., "Failure detection and correction for appearance based facial tracking", Chinese Journal of Electronics, Vol.24, No.1, pp.20-25, 2015.
S. Wang, H. Yuan, B. Cao, et al., "Facial data field", Chinese Journal of Electronics, Vol.24, No.4, pp.667-673, 2015.
D. Li, C. Liu and W. Gan, "A new cognitive model: cloud model", International Journal of Intelligent Systems, Vol.24, No.3, pp.357-375, 2009.
S.L. Wang and H.N. Yuan, "View-angle of spatial data mining", Lecture Notes in Artificial Intelligence, Vol.4093, No.5, pp.1065-1076, 2006.
C. Szegedy, S. Ioffe, V. Vanhoucke, et al., "Inception-v4, Inception-ResNet and the impact of residual connections on learning", In Proceedings of AAAI, San Francisco, California, USA, pp.4278-4284, 2017.
D.R. Li, S.L. Wang and D.Y. Li, Spatial Data Mining: Theory and Application, Springer, Berlin, Germany, pp.187-201, 2015.
J.B. Wu, H.H. Chi and L.H. Chi, "A cloud model-based approach for facial expression synthesis", Journal of Multimedia, Vol.6, No.2, pp.217-224, 2011.
H.H. Chi, L.H. Chi, M. Fang, et al., "Facial expression recognition based on cloud model", The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol.38, Part II, pp.124-128, 2010.
M.J. Lyons, S. Akamatsu, M. Kamachi, et al., "The Japanese female facial expression (JAFFE) database", In Proceedings of the Third International Conference on Automatic Face and Gesture Recognition, Nara, Japan, pp.14-16, 1998.
P. Lucey, J.F. Cohn, T. Kanade, et al., "The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression", In Proceedings of Computer Vision and Pattern Recognition Workshop on Human-Communicative Behavior, San Francisco, California, USA, pp.94-101, 2010.
K. He, X. Zhang, S. Ren, et al., "Deep residual learning for image recognition", In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, pp.770-778, 2016.
S. Srinivas, R.K. Sarvadevabhatla, K.R. Mopuri, et al., "A taxonomy of deep convolutional neural nets for computer vision", Frontiers in Robotics and AI, Vol.2, Article ID 36, pp.1-13, 2016.
S.L. Wang, H.H. Chi, H.N. Yuan, et al., "Extraction and representation of common feature from uncertain facial expressions with cloud model", Environmental Science and Pollution Research, Vol.24, No.36, pp.27778-27787, 2017.