Citation: Shunmiao ZHANG, Siyuan ZHENG, Degen HUANG, et al., “Enhancing Entity Relationship Extraction in Dialogue Texts using Hypergraph and Heterogeneous Graph,” Chinese Journal of Electronics, vol. x, no. x, pp. 1–14, xxxx, doi: 10.23919/cje.2023.00.315.

Enhancing Entity Relationship Extraction in Dialogue Texts using Hypergraph and Heterogeneous Graph

doi: 10.23919/cje.2023.00.315
More Information
  • Author Bios:

    Shunmiao ZHANG is currently an Associate Professor in the School of Computer Science and Mathematics, Fujian University of Technology. His research interests include natural language processing and machine learning. He is a member of CCF and ACM. (Email: zhangsm@fjut.edu.cn)

    Siyuan ZHENG received the B.E. degree from Fujian University of Technology in 2019. He is currently an M.S. candidate at Fujian University of Technology. His main research interests include natural language processing and relation extraction. (Email: zhengsy@smail.fjut.edu.cn)

    Degen HUANG received the B.S. degree in Computer Science from Fuzhou University, China, in 1986, and the M.S. and Ph.D. degrees in Computer Science from Dalian University of Technology, China, in 1988 and 2004, respectively. He is currently a Professor with the School of Computer Science, Dalian University of Technology. His research interests include natural language processing and machine translation. He is a Senior Member of CCF, CIPS, ACM, and CAAI, and an Associate Editor of Int. J. Advanced Intelligence. (Email: huangdg@dlut.edu.cn)

    Dan LI received the Ph.D. degree in Computer Science from the University of Amsterdam. She is currently a data scientist at Elsevier, focusing on dense retrieval, extreme multi-label classification, and generative AI. She is a member of the European Laboratory for Learning and Intelligent Systems (ELLIS). Her research interests include information retrieval and natural language processing. (Email: d.li1@elsevier.com)

  • Corresponding author: Shunmiao ZHANG (Email: zhangsm@fjut.edu.cn)
  • Received Date: 2023-09-26
  • Accepted Date: 2024-05-31
  • Available Online: 2024-07-01
  • Abstract: Dialogue relationship extraction (RE) aims to predict the relationship between two entities mentioned in a dialogue. Existing approaches struggle with long-distance entity relationships in dialogue data and with complex relationships, such as a single entity involved in multiple types of relations. To address these issues, this paper presents the hypergraph and heterogeneous graph model (HG2G), a two-tiered structure comprising a dialogue hypergraph and a dialogue heterogeneous graph. The dialogue hypergraph connects similar nodes through hyperedges and applies hypergraph convolution to capture multi-level features; the dialogue heterogeneous graph connects nodes and edges of different types and applies heterogeneous graph convolution to aggregate cross-sentence information. The integrated node representations from the two graphs capture the semantic nuances inherent in dialogue. Experimental results on the DialogRE dataset demonstrate that HG2G outperforms existing state-of-the-art methods.
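The paper's own implementation is not reproduced on this page. As an illustration of the hypergraph convolution step described in the abstract, the sketch below follows the standard HGNN-style formulation: node features are propagated through the hyperedge incidence matrix with degree normalization. The names X, H, and Theta, the use of PyTorch, the ReLU activation, and the unit hyperedge weights are assumptions for illustration, not the authors' HG2G code.

    import torch

    def hypergraph_conv(X, H, Theta):
        """One HGNN-style hypergraph convolution layer (minimal sketch).

        X:     (N, F)  node feature matrix (e.g., token/speaker/utterance nodes)
        H:     (N, E)  incidence matrix; H[i, e] = 1 if node i lies on hyperedge e
        Theta: (F, F') learnable projection
        Hyperedge weights are taken as 1. Returns the (N, F') updated node features.
        """
        Dv = H.sum(dim=1).clamp(min=1)      # node degrees
        De = H.sum(dim=0).clamp(min=1)      # hyperedge degrees
        Dv_inv_sqrt = torch.diag(Dv.pow(-0.5))
        De_inv = torch.diag(De.pow(-1.0))
        # X' = ReLU( Dv^{-1/2} H De^{-1} H^T Dv^{-1/2} X Theta )
        out = Dv_inv_sqrt @ H @ De_inv @ H.T @ Dv_inv_sqrt @ X @ Theta
        return torch.relu(out)

    # Toy usage: four nodes grouped by two hyperedges of "similar" nodes.
    X = torch.randn(4, 8)
    H = torch.tensor([[1., 0.],
                      [1., 0.],
                      [0., 1.],
                      [1., 1.]])
    Theta = torch.randn(8, 8)
    print(hypergraph_conv(X, H, Theta).shape)  # torch.Size([4, 8])

The heterogeneous-graph side of the model would additionally apply type-specific transformations when aggregating over nodes and edges of different types (R-GCN-style message passing); that step is omitted from this sketch.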
  • [1]
    D. Yu, K. Sun, C. Cardie, et al., “Dialogue-based relation extraction,” in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Virtual Event, pp. 4927–4940, 2020.
    [2]
    C. Y. Liu, W. B. Sun, W. H. Chao, et al., “Convolution neural network for relation extraction,” in Proceedings of the 9th International Conference on Advanced Data Mining and Applications, Hangzhou, China, pp. 231–242, 2013.
    [3]
    C. N. dos Santos, B. Xiang, and B. W. Zhou, “Classifying relations by ranking with convolutional neural networks,” in Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, Beijing, China, pp. 626–634, 2015.
    [4]
    D. J. Zeng, K. Liu, Y. B. Chen, et al., “Distant supervision for relation extraction via piecewise convolutional neural networks,” in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, pp. 1753–1762, 2015.
    [5]
    T. H. Nguyen and R. Grishman, “Relation extraction: Perspective from convolutional neural networks,” in Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, Denver, CO, USA, pp. 39–48, 2015.
    [6]
    D. X. Zhang and D. Wang, “Relation classification via recurrent neural network,” arXiv preprint, arXiv: 1508.01006, 2015.
    [7]
    N. T. Vu, H. Adel, P. Gupta, et al., “Combining recurrent and convolutional neural networks for relation classification,” in Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, CA, USA, pp. 534–539, 2016.
    [8]
    A. Katiyar and C. Cardie, “Going out on a limb: Joint extraction of entity mentions and relations without dependency trees,” in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, BC, Canada, pp. 917–928, 2017.
    [9]
    P. Zhou, W. Shi, J. Tian, et al., “Attention-based bidirectional long short-term memory networks for relation classification,” in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany, pp. 207–212, 2016.
    [10]
    Y. H. Zhang, V. Zhong, D. Q. Chen, et al., “Position-aware attention and supervised data improve slot filling,” in Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, pp. 35–45, 2017.
    [11]
    Y. H. Zhang, P. Qi, and C. D. Manning, “Graph convolution over pruned dependency trees improves relation extraction,” in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp. 2205–2215, 2018.
    [12]
    H. Zhu, Y. K. Lin, Z. Y. Liu, et al., “Graph neural networks with generated parameters for relation extraction,” in Proceedings of the 57th Conference of the Association for Computational Linguistics, Florence, Italy, pp. 1331–1339, 2019.
    [13]
    Z. J. Guo, Y. Zhang, and W. Lu, “Attention guided graph convolutional networks for relation extraction,” in Proceedings of the 57th Conference of the Association for Computational Linguistics, Florence, Italy, pp. 241–251, 2019.
    [14]
    F. Z. Xue, A. X. Sun, H. Zhang, et al., “GDPNet: Refining latent multi-view graph for relation extraction,” in Proceedings of the 35th AAAI Conference on Artificial Intelligence, Virtual Event, pp. 14194–14202, 2021.
    [15]
    D. Ghosal, N. Majumder, S. Poria, et al., “DialogueGCN: A graph convolutional neural network for emotion recognition in conversation,” in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, Hong Kong, China, pp. 154–164, 2019.
    [16]
    T. Ishiwatari, Y. Yasuda, T. Miyazaki, et al., “Relation-aware graph attention networks with relational position encodings for emotion recognition in conversations,” in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, Virtual Event, pp. 7360–7370, 2020.
    [17]
    B. Lee and Y. S. Choi, “Graph based network with contextualized representations of turns in dialogue,” in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Punta Cana, Dominican Republic, pp. 443-455, 2021.
    [18]
    R. Q. Jia, X. F. Zhou, L. H. Dong, et al., “Hypergraph convolutional network for group recommendation,” in Proceedings of the IEEE International Conference on Data Mining, Auckland, New Zealand, pp. 260–269, 2021.
    [19]
    Y. F. Feng, H. X. You, Z. Z. Zhang, et al., “Hypergraph neural networks,” in Proceedings of the 33rd AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, pp. 3558–3565, 2019.
    [20]
    N. Yadati, M. Nimishakavi, P. Yadav, et al., “HyperGCN: A new method of training graph convolutional networks on hypergraphs,” in Proceedings of the 33rd International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, article no. 135, 2019.
    [21]
    K. Z. Ding, J. L. Wang, J. D. Li, et al., “Be more with less: Hypergraph attention networks for inductive text classification,” in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, Virtual Event, pp. 4927–4936, 2020.
    [22]
    L. Yuan, J. Wang, L. C. Yu, et al, “Hierarchical template transformer for fine-grained sentiment controllable generation,” Information Processing & Management, vol. 59, no. 5, article no. 103048, 2022 doi: 10.1016/j.ipm.2022.103048
    [23]
    Y. B. Guo, D. M. Zhou, P. Li, et al, “Context-aware poly(a) signal prediction model via deep spatial–temporal neural networks,” IEEE Transactions on Neural Networks and Learning Systems, vol. 35, no. 6, pp. 8241–8253, 2024 doi: 10.1109/TNNLS.2022.3226301
    [24]
    S. L. Yang, D. M. Zhou, J. D. Cao, et al, “LightingNet: An integrated learning method for low-light image enhancement,” IEEE Transactions on Computational Imaging, vol. 9, pp. 29–42, 2023 doi: 10.1109/TCI.2023.3240087
    [25]
    H. Yu, K. Y. Huang, Y. Wang, et al, “Lexicon-augmented cross-domain Chinese word segmentation with graph convolutional network,” Chinese Journal of Electronics, vol. 31, no. 5, article no. 949, 2022 doi: 10.1049/cje.2021.00.363
    [26]
    K. Y. Huang, J. X. Cao, Z. Liu, et al, “Word-based method for Chinese part-of-speech via parallel and adversarial network,” Chinese Journal of Electronics, vol. 31, no. 2, pp. 337–344, 2022 doi: 10.1049/cje.2020.00.411
    [27]
    D. G. Huang, J. Zhang, and K. Y. Huang, “Automatic microblog-oriented unknown word recognition with unsupervised method,” Chinese Journal of Electronics, vol. 27, no. 1, pp. 1–8, 2018 doi: 10.1049/cje.2017.11.004
    [28]
    J. Devlin, M. W. Chang, K. Lee, et al., “BERT: Pre-training of deep bidirectional transformers for language understanding,” in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, MN, USA, pp. 4171–4186, 2019.
    [29]
    T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” in Proceedings of the 5th International Conference on Learning Representations, Toulon, France, 2017.
    [30]
    S. Poria, E. Cambria, D. Hazarika, et al., “Context-dependent sentiment analysis in user-generated videos,” in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vancouver, BC, Canada, pp. 873–883, 2017.
    [31]
    S. Poria, D. Hazarika, N. Majumder, et al., “MELD: A multimodal multi-party dataset for emotion recognition in conversations,” in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp. 527–536, 2019.
    [32]
    Y. H. Liu, M. Ott, N. Goyal, et al., “RoBERTa: A robustly optimized BERT pretraining approach,” arXiv preprint, arXiv: 1907.11692, 2019.
    [33]
    L. Qiu, Y. Liang, Y. Z. Zhao, et al., “SocAoG: Incremental graph parsing for social relation inference in dialogues,” in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Virtual Event, pp. 658–670, 2021.
    [34]
    Y. Zhou and W. S. Lee, “None class ranking loss for document-level relation extraction,” in Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, Vienna, Austria, pp. 4538–4544, 2022.
    [35]
    X. F. Bai, L. F. Song, and Y. Zhang, “Semantic-based pre-training for dialogue understanding,” in Proceedings of the 29th International Conference on Computational Linguistics, Gyeongju, Republic of Korea, pp. 592–607, 2022.
    [36]
    Y. Wang, J. Y. Zhang, J. Ma, et al., “Contextualized emotion recognition in conversation as sequence tagging,” in Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Online, pp. 186–195, 2020.
