Citation: WEN Liang, SHI Haibo, ZHANG Xiaodong, SUN Xin, WEI Xiaochi, WANG Junfeng, CHENG Zhicong, YIN Dawei, WANG Xiaolin, LUO Yingwei, WANG Houfeng. Learning to Combine Answer Boundary Detection and Answer Re-ranking for Phrase-Indexed Question Answering[J]. Chinese Journal of Electronics. doi: 10.1049/cje.2021.00.079
[1] F. Hill, A. Bordes, S. Chopra, et al., “The Goldilocks principle: Reading children's books with explicit memory representations,” Proceedings of the 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, USA, pp.1–13, 2016.
[2] P. Rajpurkar, J. Zhang, K. Lopyrev, et al., “SQuAD: 100,000+ questions for machine comprehension of text,” Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, USA, pp.2383–2392, 2016.
[3] D. Chen, J. Bolton, and C. D. Manning, “A thorough examination of the CNN/Daily Mail reading comprehension task,” Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany, pp.2358–2367, 2016.
[4] S. Wang and J. Jiang, “Machine comprehension using Match-LSTM and answer pointer,” Proceedings of the 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, pp.1–11, 2017.
[5] M. Seo, A. Kembhavi, A. Farhadi, et al., “Bidirectional attention flow for machine comprehension,” Proceedings of the 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, pp.1–13, 2017.
[6] C. Xiong, V. Zhong, and R. Socher, “Dynamic coattention networks for question answering,” Proceedings of the 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, pp.1–14, 2017.
[7] Y. Cui, Z. Chen, S. Wei, et al., “Attention-over-attention neural networks for reading comprehension,” Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, Canada, pp.593–602, 2017.
[8] W. Wang, N. Yang, F. Wei, et al., “Gated self-matching networks for reading comprehension and question answering,” Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, Canada, pp.189–198, 2017.
[9] A. W. Yu, D. Dohan, M. Luong, et al., “QANet: Combining local convolution with global self-attention for reading comprehension,” Proceedings of the 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, pp.1–16, 2018.
[10] M. Hu, Y. Peng, Z. Huang, et al., “Reinforced mnemonic reader for machine reading comprehension,” Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden, pp.4099–4106, 2018.
[11] J. Devlin, M. Chang, K. Lee, et al., “BERT: Pre-training of deep bidirectional transformers for language understanding,” Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, MN, USA, pp.4171–4186, 2019.
[12] M. Seo, T. Kwiatkowski, A. P. Parikh, et al., “Phrase-indexed question answering: A new challenge for scalable document comprehension,” Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp.559–564, 2018.
[13] M. Seo, J. Lee, T. Kwiatkowski, et al., “Real-time open-domain question answering with dense-sparse phrase index,” Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp.4430–4441, 2019.
[14] M. Joshi, D. Chen, Y. Liu, et al., “SpanBERT: Improving pre-training by representing and predicting spans,” Transactions of the Association for Computational Linguistics, vol.8, pp.64–77, 2020. doi: 10.1162/tacl_a_00300
[15] A. Trischler, T. Wang, X. Yuan, et al., “NewsQA: A machine comprehension dataset,” Proceedings of the 2nd Workshop on Representation Learning for NLP, Vancouver, Canada, pp.191–200, 2017.
[16] A. Fisch, A. Talmor, R. Jia, et al., “MRQA 2019 shared task: Evaluating generalization in reading comprehension,” Proceedings of the 2nd Workshop on Machine Reading for Question Answering, Hong Kong, China, pp.1–13, 2019.
[17] M. E. Peters, M. Neumann, M. Iyyer, et al., “Deep contextualized word representations,” Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA, pp.2227–2237, 2018.
[18] G. Lai, Q. Xie, H. Liu, et al., “RACE: Large-scale reading comprehension dataset from examinations,” Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, pp.785–794, 2017.
[19] P. Rajpurkar, R. Jia, and P. Liang, “Know what you don’t know: Unanswerable questions for SQuAD,” Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Melbourne, Australia, pp.784–789, 2018.
[20] S. Reddy, D. Chen, and C. D. Manning, “CoQA: A conversational question answering challenge,” Transactions of the Association for Computational Linguistics, vol.7, pp.249–266, 2019. doi: 10.1162/tacl_a_00266
[21] Z. Yang, P. Qi, S. Zhang, et al., “HotpotQA: A dataset for diverse, explainable multi-hop question answering,” Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp.2369–2380, 2018.
[22] S. Salant and J. Berant, “Contextualized word representations for reading comprehension,” Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA, pp.554–559, 2018.
[23] Z. Yang, Z. Dai, Y. Yang, et al., “XLNet: Generalized autoregressive pretraining for language understanding,” Advances in Neural Information Processing Systems, Vancouver, BC, Canada, pp.5753–5763, 2019.
[24] S. Wang, M. Yu, X. Guo, et al., “R³: Reinforced ranker-reader for open-domain question answering,” Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, AAAI-18, New Orleans, Louisiana, USA, pp.5981–5988, 2018.
[25] S. Wang, M. Yu, J. Jiang, et al., “Evidence aggregation for answer re-ranking in open-domain question answering,” Proceedings of the 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, pp.1–14, 2018.
[26] Z. Wang, J. Liu, X. Xiao, et al., “Joint training of candidate extraction and answer selection for reading comprehension,” Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Melbourne, Australia, pp.1715–1724, 2018.
[27] B. Kratzwald, A. Eigenmann, and S. Feuerriegel, “RankQA: Neural question answering with answer re-ranking,” Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp.6076–6085, 2019.