ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2020, Vol. 57 ›› Issue (7): 1522-1530. doi: 10.7544/issn1000-1239.2020.20190479

• Graphics and Image •


Adversarial Video Generation Method Based on Multimodal Input

Yu Haitao1, Yang Xiaoshan2, Xu Changsheng1,2   

1. School of Computer and Information, Hefei University of Technology, Hefei 230031; 2. National Laboratory of Pattern Recognition (Institute of Automation, Chinese Academy of Sciences), Beijing 100190 (yuht@mail.hfut.edu.cn)
  • Online: 2020-07-01
  • Supported by: 
    This work was supported by the National Key Research and Development Program of China (2018AAA0100604), the National Natural Science Foundation of China (61702511, 61720106006, 61728210, 61751211, U1836220, U1705262, 61872424), and the Research Program of National Laboratory of Pattern Recognition (Z-2018007).


Abstract: Video generation is an important and challenging task in the fields of computer vision and multimedia. Existing video generation methods based on generative adversarial networks (GANs) usually lack an effective scheme for controlling the coherence of the generated video. Realizing artificial intelligence algorithms that can automatically generate realistic video is an important indicator of a more complete understanding of visual appearance and motion. This paper proposes a new multimodal conditional video generation model. The model takes an image and text as input, obtains the motion information of the video through a text feature encoding network and a motion feature decoding network, and combines this motion information with the input image to generate a video sequence with coherent motion. In addition, the method predicts video frames by applying affine transformations to the input image, which makes the generation model more controllable and its results more robust. Experimental results on the SBMG (single-digit bouncing MNIST gifs), TBMG (two-digit bouncing MNIST gifs), and KTH (Kungliga Tekniska Högskolan human actions) datasets show that the proposed method outperforms existing methods in both target clarity and video coherence. In addition, qualitative evaluation and quantitative evaluation with the SSIM (structural similarity index) and PSNR (peak signal-to-noise ratio) metrics demonstrate that the proposed multimodal video frame generation network plays a key role in the generation process.
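
The abstract describes the architecture only at a high level. Below is a minimal sketch, assuming a PyTorch implementation, of how a text-conditioned affine-warp frame generator of this kind could be wired together: a text encoder produces a motion code, a recurrent motion decoder emits one 2x3 affine matrix per frame, and each frame is generated by warping the input image with that transform. The class name TextToAffineVideo, the layer sizes, and the frame count are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch: text encoder -> recurrent motion decoder -> per-frame affine
# warp of the input image. Module names and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextToAffineVideo(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=128, hidden_dim=256, num_frames=16):
        super().__init__()
        self.num_frames = num_frames
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Text feature encoding network: GRU over word embeddings.
        self.text_encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Motion feature decoding network: GRU cell unrolled once per frame.
        self.motion_cell = nn.GRUCell(hidden_dim, hidden_dim)
        self.to_affine = nn.Linear(hidden_dim, 6)  # parameters of a 2x3 affine matrix
        # Initialize the affine head at the identity transform so early
        # frames start close to the input image.
        nn.init.zeros_(self.to_affine.weight)
        self.to_affine.bias.data = torch.tensor([1., 0., 0., 0., 1., 0.])

    def forward(self, image, tokens):
        # image: (B, C, H, W); tokens: (B, T) integer word ids.
        _, h = self.text_encoder(self.embed(tokens))  # h: (1, B, hidden_dim)
        h = h.squeeze(0)
        text_code = h
        frames = []
        for _ in range(self.num_frames):
            h = self.motion_cell(text_code, h)
            theta = self.to_affine(h).view(-1, 2, 3)  # per-frame affine params
            grid = F.affine_grid(theta, image.size(), align_corners=False)
            frames.append(F.grid_sample(image, grid, align_corners=False))
        return torch.stack(frames, dim=1)             # (B, num_frames, C, H, W)

model = TextToAffineVideo()
video = model(torch.rand(2, 1, 64, 64), torch.randint(0, 1000, (2, 8)))
print(video.shape)  # torch.Size([2, 16, 1, 64, 64])
```

In a full GAN setup this generator would be trained against a video discriminator; predicting affine parameters rather than raw pixels is what the abstract credits for the improved controllability and robustness.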
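Likewise, a minimal sketch of the two quantitative metrics named in the abstract, computed per frame with scikit-image. The paper does not specify its exact evaluation code; this is only one standard way to compute SSIM and PSNR over a generated sequence.

```python
# Per-frame PSNR and SSIM averaged over a video; pred and target are
# (T, H, W) float arrays in [0, 1]. evaluate_video is a hypothetical helper.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_video(pred, target):
    psnr = [peak_signal_noise_ratio(t, p, data_range=1.0)
            for p, t in zip(pred, target)]
    ssim = [structural_similarity(t, p, data_range=1.0)
            for p, t in zip(pred, target)]
    return float(np.mean(psnr)), float(np.mean(ssim))

pred = np.clip(np.random.rand(16, 64, 64), 0, 1)
target = np.clip(pred + 0.05 * np.random.randn(16, 64, 64), 0, 1)
print(evaluate_video(pred, target))
```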

Key words: deep learning, video generation, video prediction, convolutional neural network, generative adversarial network (GAN)
