Xiao Mengnan, He Ruifang, Ma Jinsong. Event Detection Based on Hierarchical Latent Semantic-Driven Network[J]. Journal of Computer Research and Development, 2024, 61(1): 184-195. DOI: 10.7544/issn1000-1239.202220447

Event Detection Based on Hierarchical Latent Semantic-Driven Network

Funds: This work was supported by the National Natural Science Foundation of China (61976154) and the National Key Research and Development Program of China (2019YFC1521200).
More Information
  • Author Bio:

    Xiao Mengnan: born in 1997. Master. His main research interests include natural language processing and event extraction.

    He Ruifang: born in 1979. PhD, professor, PhD supervisor. Senior member of CCF. Her main research interests include natural language processing, social media mining, and machine learning.

    Ma Jinsong: born in 1997. Master. His main research interests include natural language processing and event extraction.

  • Received Date: May 27, 2022
  • Revised Date: January 03, 2023
  • Available Online: June 25, 2023
  • Event detection aims to detect triggers in sentences and classify them into pre-defined event types. The key lies in representing candidate triggers appropriately. Existing representation-based methods learn the semantic representation of candidate triggers through complex deep neural networks to improve model performance. However, these methods ignore two important problems: 1) affected by the sentence context, the same trigger word can trigger different event types; 2) due to the diversity of natural language expression, different trigger words can trigger the same event type. Inspired by the latent variables of the variational auto-encoder (VAE) and the hierarchical structures used in other natural language processing (NLP) tasks, we propose a hierarchical latent semantic-driven network (HLSD) for event detection, which addresses these two problems through the latent semantic information of sentences and words. The model projects the text representation space into a lower-dimensional latent semantic space and explores the more essential information that influences events in their macro and micro contexts. First, we obtain sentence and word representations through BERT. Second, we design a dual latent semantic mechanism and use a VAE to mine latent semantic information at the sentence and word levels. Finally, from the perspective of contexts of different granularity, we propose a coarse-to-fine hierarchical structure that makes full use of the latent semantic information of sentences and words to improve performance. Experimental results on the ACE2005 corpus show that the F1 score of the proposed method reaches 77.9%. In addition, we quantitatively analyze the two problems above in the experiments, which demonstrates the effectiveness of our method.
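    The following is a minimal, hypothetical PyTorch sketch of the dual latent semantic mechanism described in the abstract: BERT sentence and word representations are each mapped by a VAE-style encoder (reparameterization trick plus a KL term against a standard normal prior) into sentence-level and word-level latent variables, which are then fused coarse-to-fine before per-token event-type classification. All module names, dimensions, and the concatenation-based fusion are illustrative assumptions, not the authors' released implementation.

# A minimal, illustrative sketch (not the authors' code) of the dual latent semantic
# mechanism: VAE-style encoders over BERT sentence/word representations, fused coarse-to-fine.
import torch
import torch.nn as nn


class LatentSemanticEncoder(nn.Module):
    """VAE-style encoder: maps a representation to (mu, logvar) and samples z."""

    def __init__(self, input_dim: int, latent_dim: int):
        super().__init__()
        self.to_mu = nn.Linear(input_dim, latent_dim)
        self.to_logvar = nn.Linear(input_dim, latent_dim)

    def forward(self, h: torch.Tensor):
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # KL divergence against the standard normal prior, averaged over the batch
        kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1))
        return z, kl


class HierarchicalLatentSemanticSketch(nn.Module):
    """Coarse-to-fine fusion of sentence- and word-level latent semantics (hypothetical)."""

    def __init__(self, hidden_dim: int = 768, latent_dim: int = 64, num_types: int = 34):
        super().__init__()
        self.sent_vae = LatentSemanticEncoder(hidden_dim, latent_dim)  # macro (sentence) context
        self.word_vae = LatentSemanticEncoder(hidden_dim, latent_dim)  # micro (word) context
        self.classifier = nn.Linear(hidden_dim + 2 * latent_dim, num_types)

    def forward(self, word_repr: torch.Tensor, sent_repr: torch.Tensor):
        # word_repr: [batch, seq_len, hidden_dim] token representations from BERT
        # sent_repr: [batch, hidden_dim] sentence representation (e.g. the [CLS] vector)
        z_sent, kl_sent = self.sent_vae(sent_repr)   # coarse latent semantics
        z_word, kl_word = self.word_vae(word_repr)   # fine latent semantics
        # Broadcast the sentence-level latent to every token, then concatenate coarse-to-fine
        z_sent_tok = z_sent.unsqueeze(1).expand(-1, word_repr.size(1), -1)
        fused = torch.cat([word_repr, z_sent_tok, z_word], dim=-1)
        logits = self.classifier(fused)              # per-token event-type scores
        return logits, kl_sent + kl_word


if __name__ == "__main__":
    model = HierarchicalLatentSemanticSketch()
    words = torch.randn(2, 16, 768)   # stand-in for BERT token outputs
    sent = torch.randn(2, 768)        # stand-in for BERT [CLS] outputs
    logits, kl = model(words, sent)
    print(logits.shape, kl.item())    # torch.Size([2, 16, 34]) and a scalar KL term

    In training, the KL term would be added to the trigger-classification loss so that the latent space stays close to the prior; the 77.9% F1 reported above comes from the full HLSD model, not from this simplified sketch.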

  • [1]
    Zhang Hongming, Liu Xin, Pan Haojie, et al. ASER: A large-scale eventuality knowledge graph [C] //Proc of the 29th Int World Wide Web Conf. New York: ACM, 2020: 201−211
    [2]
    Glavas G, Snajder J. Event graphs for information retrieval and multi-document summarization [J]. Journal of Expert Systems with Applications, 2014, 41(15): 6904−6916
    [3]
    Eisenberg J, Sheriff M. Automatic extraction of personal events from dialogue [C] //Proc of the 1st Joint Workshop on Narrative Understanding, Storylines, and Events. Stroudsburg, PA: ACL, 2020: 63−71
    [4]
    Ahn D. The stages of event extraction [C] //Proc of the Workshop on Annotating and Reasoning about Time and Events. Stroudsburg, PA: ACL, 2006: 1–8
    [5]
    Ji Heng, Grishman R. Refining event extraction through cross-document inference [C] //Proc of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA: ACL, 2008: 254–262
    [6]
    Liao Shasha, Grishman R. Using document level cross-event inference to improve event extraction [C] //Proc of the 48th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2010: 789–797
    [7]
    Hong Yu, Zhang Jianfeng, Ma Bin, et al. Using cross-entity inference to improve event extraction [C] //Proc of the 49th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2011: 1127–1136
    [8]
    Li Qi, Ji Heng, Huang Liang. Joint event extraction via structured prediction with global features [C] //Proc of the 51st Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2013: 73–82
    [9]
    Liu Shulin, Liu Kang, He Shizhu, et al. A probabilistic soft logic based approach to exploiting latent and global information in event classification [C] //Proc of the 30th AAAI Conf on Artificial Intelligence. Menlo Park, CA: AAAI, 2016: 2993–2999
    [10]
    Chen Yubo, Xu Liheng, Liu Kang, et al. Event extraction via dynamic multi-pooling convolutional neural networks [C] //Proc of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Int Joint Conf on Natural Language Processing. Stroudsburg, PA: ACL, 2015: 167–176
    [11]
    Nguyen T H, Grishman R. Modeling skip-grams for event detection with convolution neural networks [C] //Proc of the 2016 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2016: 886–891
    [12]
    Nguyen T H, Grishman R. Event detection and domain adaptation with convolution neural networks [C] //Proc of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Int Joint Conf on Natural Language Processing. Stroudsburg, PA: ACL, 2015: 365–371
    [13]
    Nguyen T H, Cho K, Grishman R. Joint event extraction via recurrent neural networks [C] //Proc of the 15th Annual Conf of the North American Chapter of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2016: 300–309
    [14]
    Feng Xiaocheng, Huang Lifu, Tang Duyu, et al. A language-independent neural network for event extraction [C] //Proc of the 54th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2016: 66–71
    [15]
    Duan Shaoyang, He Ruifang, Zhao Wenli. Exploiting document level information to improve event detection via recurrent neural networks [C] //Proc of the 8th Int Joint Conf on Natural Language Processing. Stroudsburg, PA: ACL, 2017: 351–361
    [16]
    Liu Jian, Chen Yubo, Liu Kang. Exploiting the ground-truth: An adversarial imitation based knowledge distillation approach for event detection [C] //Proc of the 33rd AAAI Conf on Artificial Intelligence. Menlo Park, CA: AAAI, 2019: 6754−6761
    [17]
    Yang Sen, Feng Dawei, Qiao Linbo, et al. Exploring pre-trained language models for event extraction and generation [C] //Proc of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2019: 5284−5294
    [18]
    Lu Yaojie, Lin Hongyu, Han Xianpei, et al. Distilling discrimination and generalization knowledge for event detection via delta-representation learning [C] //Proc of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2019: 4366−4376
    [19]
    Zhao Yue, Jin Xiaolong, Wang Yuanzhuo, et al. Document embedding enhanced event detection with hierarchical and supervised attention [C] //Proc of the 56th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2018: 414−419
    [20]
    Chen Yubo, Yang Hang, Liu Kang, et al. Collective event detection via a hierarchical and bias tagging networks with gated multi-level attention mechanisms [C] //Proc of the 2018 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2018: 1267−1276
    [21]
    Nguyen T H, Grishman R. Graph convolutional networks with argument-aware pooling for event detection [C] //Proc of the 32nd AAAI Conf on Artificial Intelligence. Menlo Park, CA: AAAI, 2018: 5900–5907
    [22]
    Liu Jian, Chen Yubo, Liu Kang, et al. Event extraction as machine reading comprehension [C] //Proc of the 2020 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2020: 1641−1651
    [23]
    Lai V D, Nguyen T N, Nguyen T H. Event detection: Gate diversity and syntactic importance scores for graph convolution neural networks [C] //Proc of the 2020 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2020: 5405−5411
    [24]
    Liu Shulin, Chen Yubo, He Shizhu, et al. Leveraging FrameNet to improve automatic event detection [C] //Proc of the 54th Annual Meeting of the Association for Computational Linguistic. Stroudsburg, PA: ACL, 2016: 2134–2143
    [25]
    Chen Yubo, Liu Shulin, Zhang Xiang, et al. Automatically labeled data generation for large scale event extraction [C] //Proc of the 55th Annual Meeting of the Association for Computational Linguistic. Stroudsburg, PA: ACL, 2017: 409–419
    [26]
    Liu Jian, Chen Yubo, Liu Kang, et al. Event detection via gated multilingual attention mechanism [C] //Proc of the 32nd AAAI Conf on Artificial Intelligence. Menlo Park, CA: AAAI, 2018: 4865–4872
    [27]
    Wang Xiaozhi, Han Xu, Liu Zhiyuan, et al. Adversarial training for weakly supervised event detection [C] //Proc of the 17th Annual Conf of the North American Chapter of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2019: 998−1008
    [28]
    Tong Meihan, Xu Bin, Wang Shuai, et al. Improving event detection via open-domain trigger knowledge [C] //Proc of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2020: 5887−5897
    [29]
    Wang Ziqi, Wang Xiaozhi, Han Xu, et al. CLEVE: Contrastive pre-training for event extraction [C] //Proc of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th Int Joint Conf on Natural Language Processing. Stroudsburg, PA: ACL, 2021: 6283−6297
    [30]
    Kingma D P , Welling M . Auto-encoding variational Bayes [C/OL] //Proc of the 2nd Int Conf of Learning Representation. Ithaca, NY: Cornell University, 2014[2022-12-16].https://openreview.net/forum?id=33X9fd2−9FyZd
    [31]
    Miao Yishu, Grefenstette E, Blunsom P. Discovering discrete latent topics with neural variational inference [C] //Proc of the 34th Int Conf on Machine Learning. New York: ACM, 2017: 2410−2419
    [32]
    Srivastava A, Sutton C. Autoencoding variational inference for topic models [C/OL] //Proc of the 5th Int Conf of Learning Representation. Ithaca, NY: Cornell University, 2017[2022-12-16].https://openreview.net/forum?id=BybtVK9lg
    [33]
    Xu Sheng, Li Peifeng, Kong Fang, et al. Topic tensor network for implicit discourse relation recognition in Chinese [C] //Proc of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2019: 608−618
    [34]
    Yang Zichao, Yang Diyi, Dyer C, et al. Hierarchical attention networks for document classification [C] //Proc of the 15th Annual Conf of the North American Chapter of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2016: 1480−1489
    [35]
    Xiong Tengke, Manggala P. Hierarchical classification with hierarchical attention networks [C/OL] //Proc of the 24th ACM SIGKDD Conf on Knowledge Discovery and Data Mining. New York: ACM, 2018[2022-12-16].https://www.kdd.org/kdd2018/files/deep-learning-day/DLDay18_paper_47.pdf
    [36]
    Pappas N, Popescu-Belis A. Multilingual hierarchical attention networks for document classification [C] //Proc of the 8th Int Joint Conf on Natural Language Processing. Stroudsburg, PA: ACL, 2017: 1015−1025
    [37]
    Mikolov T, Sutskever I, Chen Kai, et al. Distributed representations of words and phrases and their compositionality [C] //Proc of the 27th Advances in Neural Information Processing Systems. Cambridge, MA: MIT, 2013: 3111–3119
    [38]
    Peters M, Neumann M, Iyyer M, et al. Deep contextualized word representations [C] //Proc of the 16th Annual Conf of the North American Chapter of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2018: 2227−2237
    [39]
    Devlin J, Chang Mingwei, Lee K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding [C] //Proc of the 17th Annual Conf of the North American Chapter of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2019: 4171−4186
    [40]
    Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need [C] //Proc of the 31st Advances in Neural Information Processing Systems. Cambridge, MA: MIT, 2017: 5998−6008
    [41]
    Liu Jian, Chen Yubo, Liu Kang, et al. How does context matter? On the robustness of event detection with context-selective mask generalization [C] //Proc of the 2020 Conf on Empirical Methods in Natural Language Processing: Findings. Stroudsburg, PA: ACL, 2020: 2523−2532
    [42]
    陈佳丽, 洪宇, 王捷, 等. 利用门控机制融合依存与语义信息的事件检测方法[J], 中文信息学报, 2020, 34(8): 51−60

    Chen Jiali, Hong Yu, Wang Jie, et al. Combination of dependency and semantic information via gated mechanism for event detection [J]. Journal of Chinese Information Processing, 2020, 34(8): 51−60 (in Chinese)
    [43]
    王捷, 洪宇, 陈佳丽, 等. 基于共享BERT和门控多任务学习的事件检测方法[J], 中文信息学报, 2021, 35(10): 101−109

    Wang Jie, Hong Yu, Chen Jiali, et al. Event detection by shared BERT and gated multi-task learning [J]. Journal of Chinese Information Processing, 2021, 35(10): 101−109 (in Chinese)
