Citation: SI Yujing, LI Ta, PAN Jielin, et al., "A Prefix Tree Based n-best List Re-scoring Strategy for Recurrent Neural Network Language Model", Chinese Journal of Electronics, Vol.23, No.1, pp.70-74, 2014.
R. Rosenfeld, "Two decades of statistical language modeling: Where do we go from here?", Proceedings of the IEEE, Vol.88, No.8, pp.1270-1278, 2000.
I. Oparin, M. Sundermeyer, H. Ney, J.L. Gauvain, "Performance analysis of neural networks in combination with n-gram language models", Proceedings of ICASSP'12, Kyoto, Japan, pp.5005-5008, 2012.
T. Mikolov, M. Karafiat, L. Burget, J. Cernocky, S. Khudanpur, "Recurrent neural network based language model", Eleventh Annual Conference of the International Speech Communication Association, Chiba, Japan, pp.1045-1048, 2010.
T. Mikolov, A. Deoras, S. Kombrink, L. Burget, J. Cernocky, "Empirical evaluation and combination of advanced language modeling techniques", Twelfth Annual Conference of the International Speech Communication Association, Florence, Italy, pp.605-608, 2011.
S. Kombrink, T. Mikolov, M. Karafiat, L. Burget, "Recurrent neural network based language modeling in meeting recognition", Twelfth Annual Conference of the International Speech Communication Association, Florence, Italy, pp.2877-2880, 2011.
H. Schwenk, "Continuous space language models", Computer Speech & Language, Vol.21, No.3, pp.492-518, 2007.
T. Mikolov, S. Kombrink, L. Burget, J. Cernocky, S. Khudanpur, "Extensions of recurrent neural network language model", 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, pp.5528-5531, 2011.
T. Mikolov, A. Deoras, D. Povey, L. Burget, "Strategies for training large scale neural network language models", ASRU 2011, Hawaii, USA, pp.196-201, 2011.
K. Chen, W. Bao, H. Chi, "Speed up training of the recurrent neural network based on constrained optimization techniques", Journal of Computer Science and Technology, Vol.11, No.6, pp.581-588, 1996.
G. Lecorve, P. Motlicek, "Conversion of recurrent neural network language models to weighted finite state transducers for automatic speech recognition", Eleventh Annual Conference of the International Speech Communication Association, Portland, Oregon, USA, pp.5032-5035, 2012.
A. Deoras, T. Mikolov, K. Church, "A fast re-scoring strategy to capture long-distance dependencies", Proceedings of EMNLP, Edinburgh, UK, pp.1116-1127, 2011.
Y. Si, T. Li, S. Cai, J. Pan, Y. Yan, "Recurrent neural network language model in mandarin voice input system", 2012 Eighth International Conference on Natural Computation (ICNC), Chongqing, China, pp.270-274, 2012.
J. Bilmes, K. Asanovic, C.W. Chin, et al., "Using PHiPAC to speed error back-propagation learning", Proceedings of ICASSP-97, IEEE, Vol.5, pp.4153-4156, 1997.
K. Heafield, "KenLM: Faster and smaller language model queries", Proceedings of the Sixth Workshop on Statistical Machine Translation, Edinburgh, UK, pp.187-197, 2011.
J. Shao, T. Li, Q. Zhang, Q. Zhao, Y. Yan, "A one-pass real-time decoder using memory-efficient state network", IEICE Transactions on Information and Systems, Vol.91, No.3, pp.529-537, 2008.
A. Stolcke, et al., "SRILM - an extensible language modeling toolkit", Proceedings of the International Conference on Spoken Language Processing, Vol.2, pp.901-904, 2002.
M. Boden, "A guide to recurrent neural networks and backpropagation", The Dallas Project, Vol.3, No.2, pp.1-10, 2002.