Abstract:
Event detection in video is important for video retrieval and semantic understanding. The trajectories of moving objects not only record motion information but also reflect the intentions of the moving objects, and are therefore closely related to events. However, a raw trajectory carries only the geographic information of an object and no domain knowledge, and a semantic gap exists between the low-level features extracted in content-based video analysis and the corresponding high-level concepts. It is therefore critical to combine the raw trajectory with its semantic information. To this end, this paper studies event extraction using semantic trajectories derived from video: domain knowledge is used to label areas of interest in the scene, and a new semantic trajectory representation is proposed that records the areas of interest an object stops at or passes through. The original trajectory of an object of interest can thus be converted into a corresponding semantic trajectory, so that a video event can be represented as a regular expression over the relationships between objects and areas of interest. Inspired by FOIL (first-order inductive learner), an inductive event rule learning algorithm is proposed, and regular expressions are shown to be easier to learn than the traditional well-formed formulas of first-order predicate logic. Finally, experimental results demonstrate the performance of the proposed approach.
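To make the idea summarized above concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm, of how a raw trajectory might be mapped to a semantic trajectory over labeled areas of interest and then matched against an event expressed as a regular expression; the area labels, stop threshold, and example event pattern are all hypothetical assumptions for illustration only.

    import re

    # Hypothetical labeled areas of interest (axis-aligned boxes) and their symbols.
    INTEREST_AREAS = {
        "A": (0, 0, 10, 10),     # e.g. an entrance
        "B": (20, 0, 30, 10),    # e.g. a counter
    }

    def area_of(point):
        """Return the symbol of the interest area containing (x, y), or None."""
        x, y = point
        for symbol, (x1, y1, x2, y2) in INTEREST_AREAS.items():
            if x1 <= x <= x2 and y1 <= y <= y2:
                return symbol
        return None

    def semantic_trajectory(raw_points, stop_threshold=3):
        """Map a raw trajectory [(x, y), ...] to a string of area symbols.
        An area occupied for at least `stop_threshold` consecutive samples is
        marked as a stop (lower case); shorter visits count as passing by."""
        symbols, run, count = [], None, 0
        for p in raw_points + [(float("inf"), float("inf"))]:  # sentinel flushes last run
            s = area_of(p)
            if s == run:
                count += 1
            else:
                if run is not None:
                    symbols.append(run.lower() if count >= stop_threshold else run)
                run, count = s, 1
        return "".join(symbols)

    # Hypothetical event rule: the object stops in area A and later passes through B.
    EVENT_PATTERN = re.compile(r"a.*B")

    raw = [(1, 1), (2, 1), (2, 2), (3, 2), (15, 5), (22, 5), (35, 5)]
    traj = semantic_trajectory(raw)          # -> "aB"
    print(traj, bool(EVENT_PATTERN.search(traj)))

In this sketch the semantic trajectory is simply a string of area symbols, so an event rule is an ordinary regular expression over that alphabet; the paper's contribution is learning such rules inductively rather than writing them by hand.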