    Wu Huanhuan, Xie Ruilin, Qiao Yuanxin, Chen Xiang, Cui Zhanqi. Optimizing Deep Neural Network Based on Interpretability Analysis[J]. Journal of Computer Research and Development, 2024, 61(1): 209-220. DOI: 10.7544/issn1000-1239.202220803

    Optimizing Deep Neural Network Based on Interpretability Analysis

    • In recent years, deep neural networks (DNNs) have been widely used in many fields and even replace humans in making decisions in safety-critical systems such as autonomous driving and smart healthcare, which places higher demands on DNN reliability. However, the complex multi-layer nonlinear structure of a DNN makes its internal prediction mechanism difficult to understand and to debug. Existing DNN debugging work mainly improves performance by adjusting parameters and augmenting the training set. The extent of direct parameter adjustment is hard to control and may cause the model to lose its ability to fit the training set, while unguided augmentation of the training set dramatically increases training costs. To address these problems, a DNN optimization method named OptDIA (optimizing DNN based on interpretability analysis) is proposed. Interpretability analysis is conducted on the training process and the decision-making behavior of the DNN. Based on the analysis results, the original training data are split into partitions, and the influence of each partition on the DNN's decisions is evaluated. The partitions are then transformed with different probabilities to generate new training data, which are used to retrain the DNN and improve its performance (a minimal sketch of this pipeline is given below). Experiments on nine DNNs trained on three datasets show that OptDIA improves the accuracy of the DNNs by 0.39% to 2.15% and their F1-score by 0.11% to 2.03%.
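    The sketch below is a minimal, hypothetical illustration of the pipeline described in the abstract: score input regions with an interpretability analysis, partition them by importance, and transform low-importance regions with a higher probability than high-importance ones to generate new training data for retraining. The specific attribution method (occlusion sensitivity), the transform (Gaussian noise), and the probabilities are illustrative assumptions, not the exact choices made in the paper.

```python
# Hypothetical OptDIA-style augmentation sketch (not the paper's implementation).
import numpy as np

rng = np.random.default_rng(0)

def occlusion_attribution(predict, x, patch=4):
    """Score each patch by how much masking it changes the model's confidence."""
    base = predict(x)
    h, w = x.shape
    attr = np.zeros_like(x)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = x.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            attr[i:i + patch, j:j + patch] = abs(base - predict(occluded))
    return attr

def optdia_augment(predict, x, p_low=0.8, p_high=0.2, sigma=0.1):
    """Perturb low-importance regions more often than high-importance ones."""
    attr = occlusion_attribution(predict, x)
    low_mask = attr <= np.median(attr)          # partition judged less decision-relevant
    noise = rng.normal(0.0, sigma, x.shape)
    apply_low = rng.random(x.shape) < p_low     # transform low-importance pixels often
    apply_high = rng.random(x.shape) < p_high   # transform high-importance pixels rarely
    mask = np.where(low_mask, apply_low, apply_high)
    return np.clip(x + mask * noise, 0.0, 1.0)

if __name__ == "__main__":
    # Toy "model": confidence is the mean of the upper-left quadrant of the image.
    predict = lambda img: img[:14, :14].mean()
    x = rng.random((28, 28))
    x_new = optdia_augment(predict, x)
    print("changed pixels:", int((x_new != x).sum()))
```

    The generated samples would then be appended to the original training set and the DNN retrained, which is the step the abstract credits for the reported accuracy and F1-score gains.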
