Chen Yarui, Jiang Shuoran, Yang Jucheng, Zhao Tingting, Zhang Chuanlei. Mixture of Variational Autoencoder[J]. Journal of Computer Research and Development, 2020, 57(1): 136-144. DOI: 10.7544/issn1000-1239.2020.20190204

Mixture of Variational Autoencoder

Funds: This work was supported by the National Natural Science Foundation of China (61402332, 61402331, 11803022), the Natural Science Foundation of Tianjin City (17JCQNJC00400, 18JCQNJC69800), the Science and Technology Development Foundation of Higher Education Institutions of Tianjin (2017KJ034, 2017KJ035, 2018KJ106), and the Foundation for Young Teachers of Tianjin University of Science and Technology (2017LG10).
More Information
  • Published Date: December 31, 2019
  • Abstract: The variational autoencoder (VAE) is a generative model with continuous latent variables, whose objective function is constructed by variational approximation and whose generative and inference parts are both parameterized by neural networks. The traditional VAE assumes that the multi-dimensional latent variables of the inference model are independent; this simplifies inference, but it loosens the lower bound of the objective function and limits the representation ability of the latent space. In this paper, we propose the mixture of variational autoencoders (MVAE), which generates data through a mixture of VAE components. The model takes a continuous latent vector with a Gaussian prior as the hidden representation, and a discrete latent vector with a multinomial prior as the component indicator. We then rewrite the variational optimization objective using the reparameterization trick and the stick-breaking parameterization, and train the model parameters by stochastic gradient descent. MVAE improves inference accuracy through its richer model structure, expands the representation space of the latent vector through the mixture of components, and handles the resulting optimization problems effectively through the reparameterization and stick-breaking techniques. Finally, we design comparison experiments on benchmark datasets to demonstrate the higher inference accuracy and stronger representation ability of the latent variables.
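The two parameterization tricks named in the abstract can be illustrated with a minimal NumPy sketch (not the authors' implementation; function names and shapes here are illustrative assumptions): the reparameterization trick draws the continuous latent vector as a deterministic function of the variational parameters plus independent noise, and the stick-breaking construction maps K−1 fractions in (0, 1) to K mixture weights for the component indicator.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I), so the sample is a
    # differentiable function of (mu, log_var) and gradients can flow
    # through the sampling step during stochastic gradient descent.
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps

def stick_breaking(v):
    # Map K-1 stick fractions v_k in (0, 1) to K mixture weights:
    #   pi_1 = v_1,  pi_k = v_k * prod_{j<k} (1 - v_j),
    #   pi_K = prod_j (1 - v_j)  (the remaining stick),
    # which are nonnegative and sum to 1 by construction.
    v = np.asarray(v, dtype=float)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v)])
    return remaining * np.concatenate([v, [1.0]])
```

For example, `stick_breaking([0.5, 0.5])` yields the three component weights `[0.5, 0.25, 0.25]`, which sum to 1; in the MVAE setting such weights would parameterize the multinomial distribution over the component indicator.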
