
Video Anomaly Detection Based on Space-Time Fusion Graph Network Learning

Zhou Hang, Zhan Yongzhao, Mao Qirong

Citation: Zhou Hang, Zhan Yongzhao, Mao Qirong. Video Anomaly Detection Based on Space-Time Fusion Graph Network Learning[J]. Journal of Computer Research and Development, 2021, 58(1): 48-59. DOI: 10.7544/issn1000-1239.2021.20200264. CSTR: 32373.14.issn1000-1239.2021.20200264

Funds: This work was supported by the National Natural Science Foundation of China (61672268).
  • CLC number: TP391
  • Abstract: There are strong correlations among the spatial-temporal features of abnormal events in videos. To address the impact of these correlations on detection performance, a video anomaly detection method based on space-time fusion graph network learning is proposed. From the features of the video segments, the method constructs a spatial similarity graph and a temporal trend graph, with each segment corresponding to a vertex. In the spatial similarity graph, edge weights are formed dynamically from the Top-k similarity between each vertex's features and those of the other vertexes; in the temporal trend graph, edge weights are formed from the temporal continuity of each vertex within m consecutive segments. The two graphs are fused by adaptive weighting into a space-time fusion graph convolutional network, which learns and generates the video embedding features. A graph sparsity regularization term is added to the ranking loss to reduce the over-smoothing effect of the graph model and further improve detection performance. Experiments are conducted on two challenging video anomaly datasets, UCF-Crime (University of Central Florida Crime dataset) and ShanghaiTech, with the receiver operating characteristic (ROC) curve and the area under the curve (AUC) as performance metrics. The proposed method achieves an AUC of 80.76% on UCF-Crime, 5.35% higher than the baseline, and 89.88% on ShanghaiTech, 5.44% higher than the best comparable weakly supervised method. The experimental results show that the proposed method can effectively improve the performance of video anomaly event detection.
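The page carries no code, so the block below is a minimal, self-contained PyTorch sketch of the pipeline the abstract describes: a Top-k spatial similarity graph over segment features, a temporal trend graph restricted to a window of m segments, adaptive weighted fusion of the two graphs inside a graph-convolution layer, and a ranking loss with a graph sparsity term. The function names, the exponential decay over temporal distance, the sigmoid-gated fusion weight, the hinge-style MIL ranking objective, and all hyperparameters (top_k, m, lam, the feature sizes) are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch only: names, decay, fusion gating, and hyperparameters
# are assumptions for this example, not the paper's released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def spatial_similarity_graph(feats: torch.Tensor, top_k: int = 5) -> torch.Tensor:
    """Cosine-similarity adjacency keeping only each segment's Top-k neighbours.

    feats: (T, D) features of the T video segments (e.g. clip-level CNN features).
    """
    sim = F.cosine_similarity(feats.unsqueeze(1), feats.unsqueeze(0), dim=-1)  # (T, T)
    vals, idx = sim.topk(top_k, dim=-1)
    adj = torch.zeros_like(sim).scatter_(-1, idx, vals.clamp(min=0.0))
    return adj / adj.sum(dim=-1, keepdim=True).clamp(min=1e-6)  # row-normalised weights


def temporal_trend_graph(num_segments: int, m: int = 3) -> torch.Tensor:
    """Edge weights decay with temporal distance and vanish beyond m segments."""
    pos = torch.arange(num_segments).float()
    dist = (pos.unsqueeze(0) - pos.unsqueeze(1)).abs()
    adj = torch.where(dist <= m, torch.exp(-dist), torch.zeros_like(dist))
    return adj / adj.sum(dim=-1, keepdim=True)


class FusionGraphConv(nn.Module):
    """One graph-convolution layer over the adaptively fused space-time graph."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)
        self.alpha = nn.Parameter(torch.tensor(0.0))  # learned fusion weight

    def fuse(self, adj_spatial: torch.Tensor, adj_temporal: torch.Tensor) -> torch.Tensor:
        a = torch.sigmoid(self.alpha)
        return a * adj_spatial + (1.0 - a) * adj_temporal

    def forward(self, feats, adj_spatial, adj_temporal):
        adj = self.fuse(adj_spatial, adj_temporal)
        return F.relu(self.proj(adj @ feats))


def ranking_loss_with_sparsity(scores_abnormal, scores_normal, fused_adj, lam=8e-3):
    """Hinge-style MIL ranking loss plus an L1 sparsity term on the fused graph."""
    hinge = F.relu(1.0 - scores_abnormal.max() + scores_normal.max())
    sparsity = fused_adj.abs().sum()  # discourages dense graphs / over-smoothing
    return hinge + lam * sparsity


# Toy usage: 32 segments per video, 1024-dim features per segment.
feats = torch.randn(32, 1024)
adj_s = spatial_similarity_graph(feats, top_k=5)
adj_t = temporal_trend_graph(32, m=3)
layer = FusionGraphConv(1024, 128)
scorer = nn.Linear(128, 1)
scores = torch.sigmoid(scorer(layer(feats, adj_s, adj_t))).squeeze(-1)  # per-segment anomaly scores
loss = ranking_loss_with_sparsity(scores[:16], scores[16:], layer.fuse(adj_s, adj_t))
```

The L1 penalty on the fused adjacency is one plausible reading of the graph sparsity constraint mentioned in the abstract; the paper may instead apply it to the learned edge weights or to the segment scores.
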
  • Journal citations (12)

    1. 谭奕鑫, 詹永照, 刘洪麟. Video anomaly event detection based on salient features and a spatio-temporal graph network. 江苏大学学报(自然科学版). 2025(02): 179-188
    2. 黄金钾, 詹永照, 赵逸飞. Temporal action detection with a graph network using global and local mutual perception. 江苏大学学报(自然科学版). 2024(01): 67-76
    3. 李南君, 李爽, 李拓, 邹晓峰, 王长红. A lightweight video anomaly event detection method for edge devices. 计算机应用研究. 2024(01): 306-313+320
    4. 柳德云, 李莹, 周震, 吉根林. Weakly supervised video anomaly detection based on spatio-temporal dependencies and feature fusion. 数据采集与处理. 2024(01): 204-214
    5. 肖剑, 刘天元, 吴祥, 吉根林. A video anomaly detection method based on foreground object detection and regression. 南京师大学报(自然科学版). 2024(02): 117-128
    6. 夏惠芬, 詹永照, 刘洪麟, 任晓鹏. Weakly supervised temporal action detection with class-specific frame clustering to enhance action saliency (in English). Frontiers of Information Technology & Electronic Engineering. 2024(06): 809-824
    7. 朱新瑞, 钱小燕, 施俞洲, 陶旭东, 李智昱. Video anomaly event detection with long- and short-term temporal sequence correlation. 中国图象图形学报. 2024(07): 1998-2010
    8. 张红民, 颜鼎鼎, 田钱前. A video anomaly detection method with an improved spatio-temporal graph convolutional network. 光电工程. 2024(05): 48-60
    9. 杨亚让, 吴云虎. Random-forest-based congestion fault detection for wireless sensor communication networks. 吉林大学学报(工学版). 2023(05): 1490-1495
    10. 吴德刚, 赵利平, 陈乾辉, 张宇波. Video abnormal behavior detection based on a dual-scale serial network. 广西科学. 2023(03): 575-586
    11. 梁硕, 韩翔宇, 李慧, 王书强. Simulation of an anomalous-node mining and detection method for distributed networks. 计算机仿真. 2023(07): 409-413
    12. 涂荣成, 毛先领, 孔伟杰, 蔡成飞, 赵文哲, 王红法, 黄河燕. A video-text retrieval method based on CLIP-generated multi-event representations. 计算机研究与发展. 2023(09): 2169-2179

    Other citations (18)

Metrics
  • Article views: 1165
  • Full-text HTML views: 3
  • PDF downloads: 794
  • Citations: 30
Publication history
  • Published: 2020-12-31
