Citation: TANG Huanling, ZHU Hui, WEI Hongmin, et al., “Representation of Semantic Word Embeddings Based on SLDA and Word2vec Model,” Chinese Journal of Electronics, vol.32, no.3, pp.647–654, 2023. doi: 10.23919/cje.2021.00.113

[1] W. Y. Dai, G. R. Xue, Q. Yang, et al., “Transferring naive Bayes classifiers for text classification,” in Proceedings of the 22nd AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, pp.540–545, 2007.

[2] C. Xing, D. Wang, X. W. Zhang, et al., “Document classification with distributions of word vectors,” in Proceedings of the 2014 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), Siem Reap, Cambodia, pp.1–5, 2014.

[3] C. L. Li, H. R. Wang, Z. Q. Zhang, et al., “Topic modeling for short texts with auxiliary word embeddings,” in Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, Pisa, Italy, pp.165–174, 2016.

[4] X. F. He, L. Chen, G. C. Chen, et al., “An LDA topic model based collection selection method for distributed information retrieval,” Journal of Chinese Information Processing, vol.31, no.3, pp.125–133, 2017. (in Chinese)

[5] G. B. Yang, “A novel contextual topic model for query-focused multi-document summarization,” in Proceedings of the 26th International Conference on Tools with Artificial Intelligence, Limassol, Cyprus, pp.576–583, 2014.

[6] M. Tang, L. Zhu, and X. C. Zou, “Document vector representation based on Word2vec,” Computer Science, vol.43, no.6, pp.214–217, 269, 2016. (in Chinese) doi: 10.11896/j.issn.1002-137X.2016.06.043

[7] Y. F. He and M. H. Jiang, “Information bottleneck based feature selection in web text categorization,” Journal of Tsinghua University (Science and Technology), vol.50, no.1, pp.45–48, 53, 2010. (in Chinese) doi: 10.16511/j.cnki.qhdxxb.2010.01.027

[8] D. Q. Nguyen, R. Billingsley, L. Du, et al., “Improving topic models with latent feature word representations,” Transactions of the Association for Computational Linguistics, vol.3, pp.299–313, 2015. doi: 10.1162/tacl_a_00140

[9] H. L. Tang, H. Zheng, Y. H. Liu, et al., “Tr-SLDA: a transfer topic model for cross-domains,” Acta Electronica Sinica, vol.49, no.3, pp.605–613, 2021. (in Chinese) doi: 10.12263/DZXB.20200210

[10] Z. S. Harris, “Distributional structure,” WORD, vol.10, no.2–3, pp.146–162, 1954. doi: 10.1080/00437956.1954.11659520

[11] G. Salton, A. Wong, and C. S. Yang, “A vector space model for automatic indexing,” Communications of the ACM, vol.18, no.11, pp.613–620, 1975. doi: 10.1145/361219.361220

[12] D. M. Blei, A. Y. Ng, and M. I. Jordan, “Latent Dirichlet allocation,” The Journal of Machine Learning Research, vol.3, pp.993–1022, 2003.

[13] H. L. Tang, Q. S. Dou, L. P. Yu, et al., “SLDA-TC: A novel text categorization approach based on supervised topic model,” Acta Electronica Sinica, vol.47, no.6, pp.1300–1308, 2019. (in Chinese) doi: 10.3969/j.issn.0372-2112.2019.06.017

[14] P. D. Turney and P. Pantel, “From frequency to meaning: vector space models of semantics,” Journal of Artificial Intelligence Research, vol.37, pp.141–188, 2010. doi: 10.1613/jair.2934

[15] Y. Bengio, R. Ducharme, P. Vincent, et al., “A neural probabilistic language model,” The Journal of Machine Learning Research, vol.3, pp.1137–1155, 2003.

[16] T. Mikolov, I. Sutskever, K. Chen, et al., “Distributed representations of words and phrases and their compositionality,” in Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, Lake Tahoe, NV, USA, pp.3111–3119, 2013.

[17] Q. V. Le and T. Mikolov, “Distributed representations of sentences and documents,” in Proceedings of the 31st International Conference on Machine Learning, Beijing, China, pp.1188–1196, 2014.

[18] Y. Liu, Z. Y. Liu, T. S. Chua, et al., “Topical word embeddings,” in Proceedings of the 29th AAAI Conference on Artificial Intelligence, Austin, TX, USA, pp.2418–2424, 2015.

[19] L. Q. Niu, X. Y. Dai, J. B. Zhang, et al., “Topic2Vec: Learning distributed representations of topics,” in Proceedings of the 2015 International Conference on Asian Language Processing (IALP), Suzhou, China, pp.193–196, 2015.