Aspect-based sentiment analysis has become one of the most active research topics in natural language processing. It identifies the aspect-level sentiment polarity of a text by learning from context information, which helps people understand the sentiment expressed toward different aspects. Currently, most models that combine an attention mechanism with a neural network consider only a single level of attention information, which limits them on aspect-based sentiment analysis tasks: convolutional neural networks cannot capture global structural information, while recurrent neural networks are slow to train, and the modeled dependence between words weakens as their distance increases. To address these problems, we propose the dual-attention networks for aspect-level sentiment analysis (DANSA) model. First, by introducing the multi-head attention mechanism, the model applies multiple linear transformations to the input to obtain more comprehensive attention information, which enables parallel computation and speeds up training. Second, the self-attention mechanism is introduced to obtain global structural information by computing attention scores between each word and all other words in the input, so the dependence between words is not affected by time steps or sentence length. Finally, the model predicts aspect sentiment polarity by combining the context self-attention information with the aspect-word attention information. Extensive experiments on the SemEval 2014 and Twitter datasets show that DANSA achieves better classification performance, further demonstrating its validity.
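The multi-head self-attention described above follows the standard scaled dot-product formulation. As a rough illustration only (not the authors' DANSA implementation; the projection matrices below are random stand-ins for learned parameters), it can be sketched in NumPy as:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise scores between all words
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V, weights

def multi_head_self_attention(X, num_heads, rng):
    """Project X into each head's subspace, attend, then concatenate head outputs.
    Wq/Wk/Wv are random placeholders for what would be learned weights."""
    seq_len, d_model = X.shape
    assert d_model % num_heads == 0
    d_head = d_model // num_heads
    outputs = []
    for _ in range(num_heads):
        Wq = rng.standard_normal((d_model, d_head))
        Wk = rng.standard_normal((d_model, d_head))
        Wv = rng.standard_normal((d_model, d_head))
        head_out, _ = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
        outputs.append(head_out)
    return np.concatenate(outputs, axis=-1)          # back to (seq_len, d_model)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))                      # 5 tokens, embedding dim 8
out = multi_head_self_attention(X, num_heads=2, rng=rng)
print(out.shape)                                     # (5, 8)
```

Because every word attends to every other word in one matrix product, the attention weight between two words does not decay with their distance, and all positions are processed in parallel rather than sequentially as in a recurrent network.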