    Tong Ming, Wang Fan, Wang Shuo, Ji Chenglong. A New Framework of Action Recognition: 3DHOGTCC and 3DHOOFG[J]. Journal of Computer Research and Development, 2015, 52(12): 2802-2812. DOI: 10.7544/issn1000-1239.2015.20140553


    A New Framework of Action Recognition: 3DHOGTCC and 3DHOOFG


  Abstract: Video action recognition has high academic research value and broad application prospects in the field of video semantic analysis. To describe video actions accurately, two motion descriptors based on dense trajectories are proposed in this paper. First, the motion region is densely sampled by constraining and clustering the optical flow field, which captures the local position information of the action. Second, corners of the moving target are selected as feature points and tracked to obtain dense trajectories. Finally, a 3D histograms of oriented gradients in trajectory-centered cube (3DHOGTCC) descriptor and a 3D histograms of oriented optical flow gradients (3DHOOFG) descriptor are constructed within the video cube centered on each trajectory, describing the local motion accurately. To make full use of the scene in which an action occurs, a new recognition framework that fuses the motion descriptors with static descriptors is proposed, so that dynamic and static features complement and reinforce each other; it achieves good recognition accuracy even in complex scenes, such as those with camera motion. Leave-one-out cross validation is adopted on the Weizmann and UCF-Sports datasets, and four-fold cross validation on the KTH and YouTube datasets; the experiments demonstrate the effectiveness of the proposed framework.
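  The core of the descriptor construction described above is accumulating an orientation histogram of gradients inside a video cube centered on a tracked trajectory. A minimal NumPy sketch of that idea follows; the bin count, the cube size, and the flat (non-cell-structured) binning are illustrative assumptions, not the paper's exact 3DHOGTCC/3DHOOFG scheme.

  ```python
  import numpy as np

  def cube_gradient_histogram(cube, n_bins=8):
      """Orientation histogram of spatial gradients inside a
      trajectory-centered grayscale video cube of shape (T, H, W).
      Illustrative sketch only; the paper's binning may differ."""
      cube = cube.astype(np.float64)
      gy = np.gradient(cube, axis=1)            # gradient along rows (y)
      gx = np.gradient(cube, axis=2)            # gradient along cols (x)
      mag = np.sqrt(gx ** 2 + gy ** 2)          # gradient magnitude
      ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)  # orientation in [0, 2*pi)
      # Quantize orientation into n_bins, weight votes by magnitude
      bins = np.minimum((ang / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
      hist = np.zeros(n_bins)
      np.add.at(hist, bins.ravel(), mag.ravel())
      # L1-normalize so cubes with different gradient energy are comparable
      s = hist.sum()
      return hist / s if s > 0 else hist

  # Usage: a random 15x32x32 cube standing in for one trajectory neighborhood
  rng = np.random.default_rng(0)
  desc = cube_gradient_histogram(rng.random((15, 32, 32)))
  print(desc.shape)  # (8,)
  ```

  Applying the same accumulation to gradients of the optical flow field instead of the image intensities would correspond to the 3DHOOFG variant; concatenating the per-trajectory histograms yields the motion descriptor fed to the classifier.
  
  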

       
