Fragmented information is highly dispersed across heterogeneous data sources such as text, images, video, and the Web, and is characterized by structural disorder and one-sided content. Current research extracts, represents, and understands multi-modal fragmented information by constructing visual question answering (VQA) systems. The VQA task requires producing the correct answer to a given question about a corresponding image. The aim of this paper is to design a complete framework and algorithm for question answering over fragmented image information within the standard VQA setting. The main research covers image feature extraction, question text feature extraction, multi-modal feature fusion, and answer reasoning. Deep neural networks are constructed to extract features that represent the image and the question. An attention mechanism is combined with variational inference to fuse the two modalities and reason about the answer. Experimental results show that the model effectively extracts and understands multi-modal fragmented information and improves VQA accuracy.
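The fusion step described above, in which question features guide attention over image features before answer reasoning, can be sketched minimally as follows. This is an illustrative sketch of question-guided attention fusion, not the paper's exact architecture; the layer dimensions, weight matrices (`Wi`, `Wq`, `wa`), and function names are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(img_feats, q_feat, Wi, Wq, wa):
    """Fuse image region features with a question vector via attention.

    img_feats: (R, d_img) region features from an image CNN
    q_feat:    (d_q,)     question embedding from a text encoder
    Wi, Wq, wa: illustrative projection weights (assumed, not from the paper)
    """
    # Project both modalities into a shared hidden space and combine.
    h = np.tanh(img_feats @ Wi + q_feat @ Wq)      # (R, hidden)
    # Attention weights over the R image regions, conditioned on the question.
    alpha = softmax(h @ wa, axis=0)                # (R,)
    # Question-weighted summary of the image.
    attended = alpha @ img_feats                   # (d_img,)
    # Joint multi-modal representation fed to the answer classifier.
    return np.concatenate([attended, q_feat])      # (d_img + d_q,)

rng = np.random.default_rng(0)
R, d_img, d_q, hidden = 36, 2048, 1024, 512
fused = attention_fuse(
    rng.normal(size=(R, d_img)), rng.normal(size=d_q),
    rng.normal(size=(d_img, hidden)) * 0.01,
    rng.normal(size=(d_q, hidden)) * 0.01,
    rng.normal(size=hidden),
)
print(fused.shape)  # (3072,)
```

In practice the fused vector would feed a classifier over the answer vocabulary; the variational-inference component of the paper's reasoning step is omitted here for brevity.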