Citation: LI Wenshi, “Silent Speech Interface Design Methodology and Case Study,” Chinese Journal of Electronics, vol. 25, no. 1, pp. 88-92, 2016, doi: 10.1049/cje.2016.01.014
B. Denby, T. Schultz, K. Honda, et al., “Silent speech interfaces”, Speech Communication, Vol.52, No.4, pp.270-287, 2010.
J.S. Brumberg, Alfonso Nieto-Castanon, P.R. Kennedy, et al., “Brain-computer interfaces for speech communication”, Speech Communication, Vol.52, pp.367-379, 2010.
M.A.L. Nicolelis, “Actions from thoughts”, Nature, Vol.409, pp.403-407, 2001.
J.J. Vidal, “Toward direct brain-computer communication”, Annual Review of Biophysics and Bioengineering, Vol.2, pp.157-180, 1973.
M.A. Lebedev and M.A.L. Nicolelis, “Brain-machine interfaces: Past, present and future”, Trends in Neuroscience, Vol.29, No.9, pp.536-546, 2006.
Xiaomei Pei, Jeremy Hill and Gerwin Schalk, “Silent communication”, IEEE Pulse, No.1, pp.43-46, 2012.
Li Wen-Shi, Qian Chong-Yang, Li Lei, et al., “Let ear speak: An auricular frontal-lobe-point's near-infrared spectroscope brain-computer interface”, Journal of Nanjing University (Natural Sciences), Vol.48, No.5, pp.654-660, 2012.
L.R. Hochberg and J.P. Donoghue, “Sensors for brain-computer interfaces: Options for turning thought into action”, IEEE Engineering in Medicine and Biology Magazine, No.7, pp.32-38, 2006.
E.T. Rolls and A. Treves, “The neural encoding of information in the brain”, Progress in Neurobiology, Vol.95, pp.448-490, 2011.
M. Naito, Y. Michioka, K. Ozawa, et al., “A communication means for totally locked-in ALS patients based on changes in cerebral blood volume measured with near-infrared light”, IEICE Transactions on Information and Systems, Vol.E90-D, No.7, pp.1028-1037, 2007.
C. Herff, D. Heger, F. Putze, et al., “Cross-subject classification of speaking modes using fNIRS”, 19th International Conference on Neural Information Processing (ICONIP 2012), pp.417-424, 2012.
Li Wenshi, “Novel lie detection method based on brain neurotransmitter”, Patent No.200510095198.7, China.
T. Oleson, “Auriculotherapy manual: Chinese and western systems of ear acupuncture”, Los Angeles, California: Elsevier Science Ltd., pp.8-18, 2003.
R. Sharma, V.I. Pavlovic and T. Huang, “Toward multimodal human-computer interface”, Proceedings of the IEEE, Vol.86, No.5, pp.853-869, 1998.
Unsoo Ha, Yongsu Lee, Hyunki Kim, et al., “A wearable EEG-HEG-HRV multimodal system with real-time TES monitoring for mental health”, ISSCC 2015, pp.396-398, 2015.
Li Jian-wen, Yu Xiao-ming and Cao Li-jia, “Response of skin to audible signal and skin-hearing aid”, Journal of Clinical Rehabilitative Tissue Engineering Research, Vol.12, No.13, pp.2579-2582, 2008.
M. Wand, M. Janke and T. Schultz, “Tackling speaking mode varieties in EMG-based speech recognition”, IEEE Transactions on Biomedical Engineering, Vol.61, No.10, pp.2515-2526, 2014.
M. Matsumoto and J. Hori, “Classification of silent speech using adaptive collection”, 2013 IEEE Symposium on Computational Intelligence in Rehabilitation and Assistive Technologies, pp.5-9, 2013.
Kinam Kim, “Silicon technologies and solutions for the data-driven world”, ISSCC 2015, pp.8-14, 2015.
Jong-Kwan Choi, Jae-Myoung Kim, Gunpil Hwang, et al., “A time-divided spread-spectrum code based 15pW-detectable multi-channel fNIRS IC for portable functional brain imaging”, ISSCC 2015, pp.196-198, 2015.
W.S. Li and Y.T. Li, “FOMs of consciousness measurement”, WIT Transactions on Information and Communication Technologies, pp.121-124, 2015.
Li Wenshi, Li Lei and Qian Chongyang, “A method, system and lock of speech center decoding”, Patent No.201210315559.4, China.