%A Chen Yarui, Jiang Shuoran, Yang Jucheng, Zhao Tingting, Zhang Chuanlei
%T Mixture of Variational Autoencoder
%0 Journal Article
%D 2020
%J Journal of Computer Research and Development
%R 10.7544/issn1000-1239.2020.20190204
%P 136-144
%V 57
%N 1
%U https://crad.ict.ac.cn/CN/abstract/article_4084.shtml
%8 2020-01-01
%X The variational autoencoder (VAE) is a generative model with continuous latent variables, in which the objective function is constructed by variational approximation and both the generative part and the inference part are based on neural networks. The traditional variational autoencoder assumes that the multi-dimensional latent variables of the inference model are independent, which simplifies the inference process but loosens the lower bound of the objective function and limits the representational capacity of the latent space. In this paper, we propose the mixture of variational autoencoders (MVAE), which generates data through a mixture of variational autoencoder components. In this model, we first take a continuous latent vector with a Gaussian prior as the hidden layer, and a discrete latent vector with a multinomial prior as the indicator vector of the components. We then rewrite the variational optimization objective using the reparameterization trick and the stick-breaking parameterization, and train the model parameters by stochastic gradient descent. The mixture of variational autoencoders improves inference accuracy through its richer model structure, expands the representational space of the latent vector by mixing components, and effectively solves the corresponding optimization problems through the reparameterization and stick-breaking parameterization techniques. Finally, we design comparative experiments on datasets to demonstrate the higher inference accuracy and stronger representational ability of the latent variables.