Event detection aims to identify triggers in sentences and classify them into predefined event types. The key challenge lies in appropriately representing triggers. Existing representation-based approaches learn semantic representations of candidate triggers through complex deep neural networks to improve model performance. However, they ignore two important problems: 1) affected by sentence context, the same trigger can evoke different event types; 2) owing to the diversity of natural-language expression, different triggers can evoke the same event type. Inspired by the latent variables of the variational auto-encoder (VAE) and by the hierarchical structures used in other natural language processing (NLP) tasks, we propose a hierarchical latent semantic-driven network (HLSD) for event detection that addresses these two problems through the latent semantic information of sentences and words. The model projects text representations into a lower-dimensional latent semantic space and explores the more essential contextual information of events at both the macro and micro levels. First, we obtain sentence and word representations with BERT. Second, we design a dual latent semantic mechanism and use a VAE to mine latent semantic information at the sentence and word levels. Finally, a coarse-to-fine hierarchical structure is proposed to fully exploit the latent semantic information of sentences and words from contexts of different granularities. Experiments on the ACE2005 corpus show that the proposed method achieves an F1 score of 77.9%. In addition, we quantitatively analyze the two problems above, which demonstrates the effectiveness of the method.
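The dual latent semantic mechanism above can be sketched minimally as follows. This is an illustrative reconstruction, not the authors' implementation: it assumes a standard VAE reparameterization (z = mu + sigma * eps) applied independently to a sentence-level embedding and to word-level embeddings, with randomly initialized projection heads standing in for trained parameters. All names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def vae_latent(x, d_latent, rng):
    """Map an embedding x of shape (d_in,) to a latent code of shape
    (d_latent,) via the standard VAE reparameterization trick.
    The mu / log-variance heads are randomly initialized here for
    illustration; a trained model would learn them."""
    d_in = x.shape[-1]
    w_mu = rng.normal(size=(d_in, d_latent)) * 0.02
    w_lv = rng.normal(size=(d_in, d_latent)) * 0.02
    mu = x @ w_mu                      # latent mean
    logvar = x @ w_lv                  # latent log-variance
    eps = rng.normal(size=mu.shape)    # noise for reparameterization
    return mu + np.exp(0.5 * logvar) * eps

# Hypothetical BERT outputs: one sentence vector and per-token vectors (768-d).
sent_emb = rng.normal(size=768)          # macro (sentence-level) context
word_embs = rng.normal(size=(12, 768))   # micro (word-level) context, 12 tokens

# Coarse latent code for the sentence, fine latent codes per word; a
# hierarchical model would then combine the two granularities.
z_sent = vae_latent(sent_emb, 64, rng)
z_words = np.stack([vae_latent(w, 64, rng) for w in word_embs])
print(z_sent.shape, z_words.shape)  # (64,) (12, 64)
```

In this sketch the sentence code captures the macro context that disambiguates a polysemous trigger, while the per-word codes capture the micro context that groups different triggers evoking the same event type.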