ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2020, Vol. 57 ›› Issue (1): 145-158. doi: 10.7544/issn1000-1239.2020.20190180

• Artificial Intelligence •

Action Recognition of Temporal Segment Network Based on Feature Fusion

Li Hongjun1,2,3,4, Ding Yupeng1, Li Chaobo1, Zhang Shibing1,3   

  1. School of Information Science and Technology, Nantong University, Nantong, Jiangsu 226019; 2. State Key Laboratory for Novel Software Technology (Nanjing University), Nanjing 210023; 3. Nantong Research Institute for Advanced Communication Technologies, Nantong, Jiangsu 226019; 4. Tongke School of Microelectronics, Nantong, Jiangsu 226019 (lihongjun@ntu.edu.cn)
  • Online: 2020-01-01
  • Supported by: 
    This work was supported by the National Natural Science Foundation of China (61871241), the Ministry of Education Cooperation in Production and Education (201802302115), the Educational Science Research Subject of China Transportation Education Research Association (Jiaotong Education Research 1802-118), the Science and Technology Program of Nantong (JC2018025, JC2018129), the Nanjing University State Key Laboratory for Novel Software Technology (KFKT2019B015), the Postgraduate Research and Practice Innovation Program of Jiangsu Province (KYCX19_2056), and the Nantong University-Nantong Joint Research Center for Intelligent Information Technology (KFKT2017B04).

Abstract: Action recognition is a hot research topic and a challenging task in the field of computer vision. Recognition performance is closely related to the type of network input data, the network structure, and the feature fusion strategy. At present, the main inputs of action recognition networks are RGB images and optical flow images, and the network structures are mainly based on two-stream and 3D convolution architectures. The selection of features directly affects recognition efficiency, and many problems in multi-level feature fusion remain to be solved. To address the limitation that the popular two-stream convolution network takes only RGB images and optical flow images as input, sparse features in the low-rank space, which effectively capture the moving objects in a video, are used to supplement the network input data. Meanwhile, to compensate for the lack of information interaction in the deep network, high-level semantic information and low-level detail information are combined to recognize actions jointly, which gives the temporal segment network a further performance advantage. Extensive subjective and objective comparison experiments on UCF101 and HMDB51 show that the proposed algorithm is significantly better than several state-of-the-art algorithms, reaching average accuracies of 97.1% and 76.7%, respectively. The experimental results show that our method can effectively improve the recognition rate of action recognition.
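
As a concrete illustration of the two ideas summarized above, the sketches below show (a) how a sparse motion component can be separated from a clip by a low-rank plus sparse decomposition and used as an extra input modality alongside RGB and optical flow, and (b) how low-level detail features and high-level semantic features can be fused before the segment-level consensus of a temporal segment network. These are minimal Python (NumPy/PyTorch) sketches under stated assumptions: the names (rpca_sparse, MultiLevelFusion, segment_consensus), the inexact-ALM solver, and all hyperparameters are illustrative and are not taken from the paper, whose exact formulation and network configuration are not given in this abstract.

```python
import numpy as np

def rpca_sparse(frames, lam=None, mu=None, n_iter=50):
    """Split a stack of grayscale frames (t, h, w) into a low-rank background
    and a sparse motion component with a simple inexact-ALM robust PCA loop;
    the sparse part serves as the motion-salient supplementary input."""
    D = frames.reshape(frames.shape[0], -1).astype(np.float64)
    if lam is None:                 # standard RPCA weight: 1 / sqrt(max dimension)
        lam = 1.0 / np.sqrt(max(D.shape))
    if mu is None:                  # crude penalty heuristic, an assumption only
        mu = 0.25 * D.size / (np.abs(D).sum() + 1e-12)
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)
    for _ in range(n_iter):
        # low-rank update: singular-value thresholding of D - S + Y/mu
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # sparse update: element-wise soft thresholding of D - L + Y/mu
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # dual ascent on the reconstruction residual
        Y += mu * (D - L - S)
    return S.reshape(frames.shape)
```

The fusion step can be pictured as concatenating globally pooled low-level and high-level feature maps for each sampled snippet and then averaging the per-snippet predictions over the temporal segments, in the spirit of a TSN consensus (again a hypothetical sketch, not the paper's exact architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLevelFusion(nn.Module):
    """Fuse low-level detail features with high-level semantic features
    from a backbone, then classify the fused descriptor per snippet."""
    def __init__(self, low_channels, high_channels, num_classes):
        super().__init__()
        self.reduce_low = nn.Conv2d(low_channels, high_channels, kernel_size=1)
        self.classifier = nn.Linear(2 * high_channels, num_classes)

    def forward(self, low_feat, high_feat):
        # low_feat: (N*K, low_channels, H, W); high_feat: (N*K, high_channels, h, w)
        low = F.adaptive_avg_pool2d(self.reduce_low(low_feat), 1).flatten(1)
        high = F.adaptive_avg_pool2d(high_feat, 1).flatten(1)
        return self.classifier(torch.cat([low, high], dim=1))

def segment_consensus(snippet_logits):
    # snippet_logits: (N, K, num_classes); average over the K temporal segments
    return snippet_logits.mean(dim=1)
```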

Key words: action recognition, sparse features, temporal segment network, two-stream convolution network, feature fusion

CLC Number: