With the rapid development of artificial intelligence, deep neural networks have been widely applied in computer vision, signal analysis, and natural language processing. In natural language processing, they help machines process, understand, and use human language through tasks such as syntactic analysis, semantic analysis, and text comprehension. However, existing studies have shown that deep models are vulnerable to attacks from adversarial texts: adding imperceptible adversarial perturbations to normal texts can cause natural language processing models to make wrong predictions. To improve the robustness of natural language processing models, defense-related research has also advanced in recent years. Building on existing studies, we comprehensively review related work on adversarial attacks, defenses, and robustness analysis in natural language processing tasks. Specifically, we first introduce the research tasks and the relevant natural language processing models. We then describe attack and defense approaches separately. We further investigate the certified robustness analysis and benchmark datasets of natural language processing models, and provide a detailed introduction to natural language processing application platforms and toolkits. Finally, we summarize future directions for research on attacks and defenses.
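To make the notion of an imperceptible textual perturbation concrete, the following is a minimal, self-contained sketch (not taken from any work surveyed here) of a greedy word-substitution attack against a toy bag-of-words sentiment classifier. The lexicon, synonym table, and all function names are illustrative assumptions.

```python
# Toy bag-of-words sentiment lexicon (hypothetical scores).
LEXICON = {"great": 2.0, "good": 1.0, "fine": 0.1, "bad": -1.0, "awful": -2.0}

# Hypothetical synonym candidates that preserve surface meaning.
SYNONYMS = {"great": ["fine"], "awful": ["bad"]}

def predict(tokens):
    """Classify as 'positive' if the summed lexicon score is > 0."""
    score = sum(LEXICON.get(t, 0.0) for t in tokens)
    return "positive" if score > 0 else "negative"

def attack(tokens):
    """Greedily try synonym swaps; return the first one that flips the label."""
    original = predict(tokens)
    for i, tok in enumerate(tokens):
        for cand in SYNONYMS.get(tok, []):
            trial = tokens[:i] + [cand] + tokens[i + 1:]
            if predict(trial) != original:
                return trial  # successful adversarial example
    return None  # no label-flipping substitution found

text = ["great", "but", "bad", "acting"]
adv = attack(text)
```

Here replacing "great" with the near-synonym "fine" barely changes the sentence to a human reader, yet it flips the toy classifier's prediction from positive to negative, which is exactly the failure mode adversarial-text research studies at scale.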