
SpikingReLU: A High-Performance Spiking Neural Network Based on Hybrid Model Conversion

Abstract: Brain-inspired spiking neural networks (SNNs) have shown considerable promise due to their strong spatiotemporal encoding capabilities and event-driven computation. Current mainstream SNN approaches fall into two categories: converting pre-trained artificial neural networks (ANNs) into SNNs (ANN conversion) and directly training SNNs. However, ANN conversion methods require a large number of time steps to reduce conversion errors, while directly trained SNNs suffer from limited representational capacity. To address these challenges, we propose a high-performance spiking neural network based on hybrid model conversion. The proposed method decouples model training from inference. During the training phase, a portion of the spiking neurons in the SNN are strategically replaced with ReLU activation functions to enhance feature learning, thereby constructing a high-performance hybrid model that integrates both ANN and SNN components. Since the ANN part of the hybrid model cannot perform event-driven computation, we further introduce a reparameterization technique to convert the entire inference process of the hybrid model into event-driven computation without performance loss. The proposed method thus combines the advantages of high performance and event-driven computation. Experimental results show that the proposed method achieves classification accuracies of 97.31% and 83.34% on CIFAR-10 and CIFAR-100, respectively, 70.89% on ImageNet, and 82.71% on the neuromorphic dataset CIFAR10-DVS, outperforming state-of-the-art methods.
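The interchangeability of ReLU activations and spiking neurons that the hybrid model exploits rests on a well-known rate-coding correspondence: driven by a constant input for T time steps, an integrate-and-fire (IF) neuron's firing rate approximates a clipped ReLU of that input. A minimal sketch of this correspondence (the IF dynamics with soft reset and unit threshold are a standard textbook assumption, not a detail taken from this paper):

```python
def relu(x):
    """Standard ReLU activation used in the ANN part of the hybrid model."""
    return max(x, 0.0)

def if_neuron_rate(x, T=100, v_th=1.0):
    """Firing rate of an integrate-and-fire neuron driven by a constant
    input x for T time steps, with soft reset (subtract threshold)."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += x                 # integrate input into membrane potential
        if v >= v_th:          # fire when the threshold is crossed
            spikes += 1
            v -= v_th          # soft reset: subtract threshold
    return spikes / T

# For 0 <= x <= v_th, the rate (times v_th) converges to relu(x) as T grows;
# negative inputs never fire, matching relu's zero branch.
print(if_neuron_rate(0.37, T=1000))   # close to relu(0.37) = 0.37
print(if_neuron_rate(-0.5, T=100))    # 0.0, like relu(-0.5)
```

This approximation error shrinks as T grows, which is exactly why pure ANN-to-SNN conversion needs many time steps; the paper's hybrid training sidesteps this by using exact ReLU activations during training and reparameterizing for event-driven inference.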

       
