ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2020, Vol. 57 ›› Issue (1): 159-174.doi: 10.7544/issn1000-1239.2020.20190042


Causal Relation Extraction Based on Graph Attention Networks

Xu Jinghang1, Zuo Wanli1,2, Liang Shining1, Wang Ying1,2   

  1 (College of Computer Science and Technology, Jilin University, Changchun 130012); 2 (Key Laboratory of Symbol Computation and Knowledge Engineering (Jilin University), Ministry of Education, Changchun 130012)
  • Online: 2020-01-01
  • Supported by: 
    This work was supported by the National Natural Science Foundation of China (61976103, 61872161), the Project of Technical Tackle-key-problem of Jilin Province of China (20190302029GX), the Natural Science Foundation of Jilin Province of China (20180101330JC, 2018101328JC), and the Project of the Development and Reform Commission of Jilin Province (2019C053-8).

Abstract: Causality is a relation between cause and effect in which the occurrence of the cause brings about the occurrence of the effect. As one of the most important types of relations between entities, causality plays a vital role in many fields such as automatic reasoning and scenario generation. Extracting causal relations is therefore a fundamental task in natural language processing and text mining. Unlike traditional text classification or relation extraction methods, this paper proposes a sequence labeling method that extracts causal entities from text and identifies the direction of causality, without relying on feature engineering or causal background knowledge. The main contributions of this paper can be summarized as follows: 1) we extend the syntactic dependency tree to a syntactic dependency graph, apply graph attention networks to natural language processing, and introduce the concept of S-GAT (graph attention network based on the syntactic dependency graph); 2) we propose the Bi-LSTM+CRF+S-GAT model for causal extraction, which generates a causal label for each word in a sentence from the input word vectors; 3) we modify and extend the SemEval dataset and define rules to relabel the experimental data, overcoming defects of the original labeling scheme. Extensive experiments on the expanded SemEval dataset show that our model improves prediction accuracy by 0.064 over the state-of-the-art Bi-LSTM+CRF+self-ATT model.
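To illustrate the core idea of attending over a syntactic dependency graph, the sketch below implements one generic graph-attention layer in the style of Veličković et al.'s GAT, restricted by an adjacency mask built from dependency arcs. This is a minimal NumPy illustration, not the paper's S-GAT implementation; the function and parameter names (`gat_layer`, `a_src`, `a_dst`) are hypothetical, and the additive decomposition of the attention logits is an assumed standard GAT formulation.

```python
import numpy as np

def masked_softmax(scores, mask):
    # Non-neighbors receive a large negative logit, i.e. ~zero attention.
    scores = np.where(mask, scores, -1e9)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gat_layer(H, A, W, a_src, a_dst):
    """One graph-attention layer over a dependency graph.

    H: (n, d_in) word representations (e.g. Bi-LSTM outputs)
    A: (n, n) boolean adjacency -- dependency arcs plus self-loops
    W: (d_in, d_out) shared projection
    a_src, a_dst: (d_out,) attention parameter vectors, so the usual
    GAT logit a^T [Wh_i || Wh_j] decomposes into two dot products.
    """
    Z = H @ W                                        # project node features
    logits = (Z @ a_src)[:, None] + (Z @ a_dst)[None, :]
    logits = np.where(logits > 0, logits, 0.2 * logits)  # LeakyReLU
    alpha = masked_softmax(logits, A)                # attend only along arcs
    return alpha @ Z                                 # neighborhood aggregate
```

Each word's new representation is a weighted sum over its syntactic neighbors only, which is what lets dependency structure (rather than linear word order) guide the causal labeling.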

Key words: causal relation extraction, graph attention networks (GATs), sequence labeling, syntactic dependency graph, bidirectional long short-term memory (Bi-LSTM)
