ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2020, Vol. 57 ›› Issue (1): 145-158. doi: 10.7544/issn1000-1239.2020.20190180


Action Recognition of Temporal Segment Network Based on Feature Fusion

Li Hongjun1,2,3,4, Ding Yupeng1, Li Chaobo1, Zhang Shibing1,3   

  1. School of Information Science and Technology, Nantong University, Nantong, Jiangsu 226019
  2. State Key Laboratory for Novel Software Technology (Nanjing University), Nanjing 210023
  3. Nantong Research Institute for Advanced Communication Technologies, Nantong, Jiangsu 226019
  4. Tongke School of Microelectronics, Nantong, Jiangsu 226019
  • Online: 2020-01-01
  • Supported by: 
    This work was supported by the National Natural Science Foundation of China (61871241), the Ministry of Education Cooperation in Production and Education (201802302115), the Educational Science Research Subject of China Transportation Education Research Association (Jiaotong Education Research 1802-118), the Science and Technology Program of Nantong (JC2018025, JC2018129), the Nanjing University State Key Laboratory for Novel Software Technology (KFKT2019B015), the Postgraduate Research and Practice Innovation Program of Jiangsu Province (KYCX19_2056), and the Nantong University-Nantong Joint Research Center for Intelligent Information Technology (KFKT2017B04).

Abstract: Action recognition is a hot research topic and a challenging task in computer vision. Recognition performance is closely tied to the network's input data type, network structure, and feature fusion strategy. At present, the main inputs to action recognition networks are RGB images and optical flow images, and the dominant architectures are based on two-stream networks and 3D convolution. However, the choice of features directly affects recognition efficiency, and many problems in multi-layer feature fusion remain unsolved. To address the limitations of RGB and optical flow images as inputs to the popular two-stream convolutional network, sparse features extracted in a low-rank space are used to capture the characteristics of moving objects in video and to supplement the network's input data. Meanwhile, to compensate for the lack of information interaction in deep networks, high-level semantic information and low-level detail information are combined for recognition, giving the temporal segment network a further performance advantage. Extensive subjective and objective comparison experiments on UCF101 and HMDB51 show that the proposed algorithm significantly outperforms several state-of-the-art algorithms, reaching average accuracies of 97.1% and 76.7% on the two datasets, respectively. The experimental results demonstrate that our method effectively improves action recognition accuracy.
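The two ideas sketched in the abstract can be illustrated with a minimal toy example. The sketch below is not the paper's actual method: it stands in for the low-rank/sparse decomposition with a simple per-pixel median background model (the median frame approximates the low-rank static background, and thresholded residuals play the role of the sparse moving-object features), and it stands in for multi-layer feature fusion with plain concatenation of a low-level and a high-level feature vector. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def sparse_motion_features(frames, thresh=25.0):
    """Approximate the sparse (moving-object) component of a frame stack.

    frames: array of shape (T, H, W) holding grayscale intensities.
    The per-pixel median over time is a crude stand-in for the low-rank
    background; residuals whose magnitude exceeds `thresh` are kept as
    the sparse motion features, the rest are zeroed out.
    """
    background = np.median(frames, axis=0)        # low-rank-like background
    residual = frames - background                # candidate sparse component
    return np.where(np.abs(residual) > thresh, residual, 0.0)

def fuse_features(low_level, high_level):
    """Fuse low-level detail and high-level semantic features by concatenation."""
    return np.concatenate([low_level, high_level], axis=-1)

# Toy clip: a static scene with one bright 2x2 block moving rightward.
frames = np.zeros((4, 8, 8))
for t in range(4):
    frames[t, 2:4, t:t + 2] = 200.0

sparse = sparse_motion_features(frames)           # mostly zeros, motion kept
low = sparse.reshape(4, -1).mean(axis=0)          # pooled low-level features
fused = fuse_features(low, np.ones(16))           # dummy 16-dim semantic vector
```

In the real system, the sparse component would be obtained by a proper low-rank/sparse decomposition and fed to the network as an additional input modality alongside RGB and optical flow, while fusion would combine feature maps from different network layers rather than flat vectors.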

Key words: action recognition, sparse features, temporal segment network, two-stream convolution network, feature fusion
