
    Action Recognition of Temporal Segment Network Based on Feature Fusion

    • Abstract: Action recognition is a research hotspot and a challenging task in computer vision. Recognition performance is closely tied to the type of network input data, the network structure, and the feature fusion stage. At present, the mainstream input data for action recognition networks are RGB images and optical flow images, and the network structures are mainly two-stream and 3D convolutional architectures; feature selection directly affects recognition efficiency, and many problems in multi-level feature fusion remain unsolved. To address the limitation that mainstream two-stream convolutional networks take only RGB images and optical flow images as input, sparse features in a low-rank space, which can effectively capture moving-object information in video, are used to supplement the network input data. At the same time, to compensate for the lack of information interaction within the deep network, high-level semantic information and low-level detail information are combined to recognize actions jointly, giving the temporal segment network a performance advantage. The method achieves 97.1% accuracy on UCF101 and 76.7% on HMDB51, a considerable improvement over current mainstream algorithms, and the experimental results show that it effectively improves the action recognition rate.
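    The abstract does not spell out how the supplementary input stream is computed, but the underlying idea, decomposing a stack of frames into a low-rank background and a sparse residual that highlights moving objects, can be illustrated with a minimal sketch. The snippet below is a hypothetical stand-in rather than the paper's actual decomposition: it uses a plain truncated-SVD background plus soft thresholding in NumPy, and the function name, rank, and threshold are assumptions.

```python
import numpy as np

def motion_saliency(frames, rank=1, thresh=25.0):
    """Hypothetical low-rank/sparse split of a clip (not the paper's method).

    frames: (T, H, W) array of grayscale frames.
    Returns a (T, H, W) sparse residual in which only pixels that deviate
    strongly from the low-rank background (i.e. moving objects) survive.
    """
    T, H, W = frames.shape
    D = frames.reshape(T, H * W).astype(np.float64)   # one frame per row

    # Low-rank approximation of the clip: the (mostly static) background.
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

    # Sparse residual: soft-threshold the deviation from the background so
    # that only salient moving-object pixels remain.
    S = D - L
    S = np.sign(S) * np.maximum(np.abs(S) - thresh, 0.0)
    return S.reshape(T, H, W)

# Example with random frames standing in for a decoded 16-frame clip.
clip = np.random.randint(0, 256, size=(16, 112, 112)).astype(np.float32)
sparse_maps = motion_saliency(clip)
print(sparse_maps.shape)  # (16, 112, 112)
```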

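    Likewise, the fusion of high-level semantic features with low-level detail features is only described at a high level. The sketch below shows one common way such a fusion head could look in PyTorch (global pooling of an early and a late backbone stage, concatenation, then a classifier); the class name, channel sizes, and the concatenation-based fusion are assumptions rather than the paper's exact design, with num_classes=101 chosen to match UCF101.

```python
import torch
import torch.nn as nn

class MultiLevelFusionHead(nn.Module):
    """Hypothetical fusion head: combines a low-level (detail) feature map
    with a high-level (semantic) feature map before classification."""

    def __init__(self, low_channels=256, high_channels=2048, num_classes=101):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # spatially pool both feature maps
        self.fc = nn.Linear(low_channels + high_channels, num_classes)

    def forward(self, low_feat, high_feat):
        # low_feat:  (N, low_channels,  H1, W1) from an early backbone stage
        # high_feat: (N, high_channels, H2, W2) from the final backbone stage
        low = self.pool(low_feat).flatten(1)
        high = self.pool(high_feat).flatten(1)
        fused = torch.cat([low, high], dim=1)  # simple concatenation fusion
        return self.fc(fused)

# Example with dummy tensors shaped like ResNet-50 stage outputs.
head = MultiLevelFusionHead()
low = torch.randn(2, 256, 56, 56)
high = torch.randn(2, 2048, 7, 7)
print(head(low, high).shape)  # torch.Size([2, 101])
```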
       
