    Chen Yarui, Jiang Shuoran, Yang Jucheng, Zhao Tingting, Zhang Chuanlei. Mixture of Variational Autoencoder[J]. Journal of Computer Research and Development, 2020, 57(1): 136-144. DOI: 10.7544/issn1000-1239.2020.20190204

    Mixture of Variational Autoencoder

    • Abstract: Variational autoencoder (VAE) is a generative model with continuous latent variables, where the objective function is constructed by variational approximation, and both the generative model and the variational inference model are built on neural networks. The traditional VAE assumes that the multi-dimensional latent variables of the inference model are mutually independent. This assumption simplifies inference, but it makes the variational lower bound overly loose and limits the representation ability of the latent space. In this paper, we propose the mixture of variational autoencoder (MVAE) model, which generates sample data through multiple VAE components, enriching the structure of the variational inference model and expanding the latent representation space. The model takes a continuous latent vector with a Gaussian prior as the hidden-layer representation, and a discrete latent vector with a multinomial prior as the indicator vector of the components. For the variational optimization objective of MVAE, we apply the reparameterization trick and the stick-breaking parameterization to rewrite the objective function, and train the model parameters by stochastic gradient descent. Mixing components strengthens the representation ability of the latent space and improves the approximate inference accuracy, while the reparameterization and stick-breaking parameterization techniques effectively solve the corresponding optimization problem. Finally, comparison experiments on the MNIST and OMNIGLOT datasets demonstrate the higher inference accuracy and stronger latent-space representation ability of the MVAE model.
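The two parameterization devices named in the abstract are standard and can be illustrated compactly. Below is a minimal NumPy sketch (function names and shapes are ours, not the paper's): `reparameterize` draws the continuous latent vector as a deterministic function of the Gaussian parameters plus parameter-free noise, and `stick_breaking` maps K-1 stick fractions in (0, 1) to K non-negative mixture weights that sum to one, as used for the discrete component indicator.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # so gradients can flow through mu and log_var while the randomness
    # stays in the parameter-free noise eps.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def stick_breaking(v):
    # Stick-breaking parameterization: given K-1 fractions v_k in (0, 1),
    # pi_k = v_k * prod_{j<k} (1 - v_j), and the last weight takes the
    # remainder of the stick, so the K weights sum to 1.
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)))
    pi = np.append(v, 1.0) * remaining
    return pi
```

For example, `stick_breaking(np.array([0.5, 0.5]))` splits the unit stick into weights `[0.5, 0.25, 0.25]`; in the MVAE setting such weights would parameterize the multinomial over mixture components.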
