    Citation: Li Chaofan, Chen Songcan. Self-Supervised EEG Classification with Multi-Level Feature Modeling and Spatiotemporal Dependence Mining[J]. Journal of Computer Research and Development. DOI: 10.7544/issn1000-1239.202550351

    Self-Supervised EEG Classification with Multi-Level Feature Modeling and Spatiotemporal Dependence Mining

    Multi-channel electroencephalography (EEG) is a non-invasive technique that records the brain's electrical activity through multiple electrodes placed on the scalp, helping to characterize an individual's psychological state and supporting the diagnosis of various diseases. Given the high cost and technical difficulty of annotating large-scale EEG data, self-supervised learning (SSL), a label-free learning paradigm, has attracted wide attention in the EEG domain. SSL exploits the intrinsic structure of the data to learn representations, thereby improving a model's generalization ability. Despite significant progress, current research still faces two challenges. First, multi-channel EEG data contain complex spatiotemporal correlations, yet many existing methods model either the temporal or the spatial dimension in isolation rather than integrating both, limiting their ability to capture the intrinsic complexity of EEG signals. Second, many current approaches do not effectively integrate segment-level and instance-level information: the former improves generalization, while the latter helps the model adapt to downstream (classification) tasks. To address these challenges, we propose and implement a self-supervised pre-training framework that combines contrastive and reconstruction strategies. Specifically, a channel masking strategy built on temporal mask reconstruction allows the framework to capture the spatiotemporal relationships in EEG data, and fine-grained modeling of inter-segment relationships improves both performance and generalization. Meanwhile, integrating instance-level contrastive learning with the masked reconstruction task helps the model learn representations that are both instance-discriminative and locally perceptive, facilitating adaptation to downstream tasks. In addition, a self-paced learning mechanism further strengthens generalization. Experimental results on multiple EEG tasks demonstrate the effectiveness of the proposed approach.
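    The abstract does not specify the network architecture or the loss formulation, so the following is only a minimal, illustrative sketch of how a joint masked-reconstruction plus instance-level contrastive pre-training objective of this general kind might be wired up in PyTorch. The class name SSLPretrainer, the GRU encoder, the mask_ratio and temperature values, and the equal weighting of the two losses are all assumptions for illustration, not the authors' implementation; the paper's self-paced learning mechanism is noted only in a comment.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class SSLPretrainer(nn.Module):
            """Illustrative sketch: masked reconstruction + instance-level contrastive pre-training."""
            def __init__(self, n_channels=64, d_model=128, mask_ratio=0.3, temperature=0.1):
                super().__init__()
                self.mask_ratio = mask_ratio
                self.temperature = temperature
                # Placeholder encoder; the paper's actual encoder is not described in the abstract.
                self.encoder = nn.GRU(n_channels, d_model, batch_first=True)
                self.decoder = nn.Linear(d_model, n_channels)   # reconstructs masked EEG values
                self.proj = nn.Linear(d_model, d_model)         # projection head for contrastive loss

            def forward(self, x):
                # x: (batch, time, channels) multi-channel EEG segments
                b, t, c = x.shape

                # Random time/channel masking (a stand-in for the paper's channel masking
                # strategy based on temporal mask reconstruction).
                mask = torch.rand(b, t, c, device=x.device) < self.mask_ratio
                h, _ = self.encoder(x.masked_fill(mask, 0.0))        # (b, t, d_model)

                # Masked-reconstruction loss, computed only on masked positions.
                recon = self.decoder(h)
                loss_rec = F.mse_loss(recon[mask], x[mask])

                # Instance-level contrastive loss (NT-Xent) between two masked views
                # of the same segment, encouraging instance discriminability.
                mask2 = torch.rand(b, t, c, device=x.device) < self.mask_ratio
                h2, _ = self.encoder(x.masked_fill(mask2, 0.0))
                z1 = F.normalize(self.proj(h.mean(dim=1)), dim=-1)   # (b, d_model)
                z2 = F.normalize(self.proj(h2.mean(dim=1)), dim=-1)
                logits = z1 @ z2.T / self.temperature
                labels = torch.arange(b, device=x.device)
                loss_con = F.cross_entropy(logits, labels)

                # The paper's self-paced learning mechanism (gradually re-weighting samples
                # from easy to hard during training) is omitted in this sketch.
                return loss_rec + loss_con

    In use, a batch of EEG segments of shape (batch, time, channels) would be passed through the module, the returned scalar loss back-propagated through the shared encoder, and the pre-trained encoder then fine-tuned on the downstream classification task.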
