Sleep CLIP: A Multimodal Sleep Staging Model Based on Sleep Signals and Sleep Staging Labels

Weijia Yang, Yuxian Wang, Jiancheng Hu, Tuming Yuan
Since the release of the contrastive language-image pre-training (CLIP) model designed by the OpenAI team, it has been applied in several fields owing to its high accuracy. Sleep staging is an important method for diagnosing sleep disorders, and completing sleep staging tasks with high accuracy has always been the main goal of sleep staging algorithm designers. This study aims to design a multimodal model, based on the CLIP model, that is better suited to sleep staging tasks by jointly using sleep signals and stage labels. The model is pre-trained on five different training sets. Finally, the proposed method is tested on two datasets (EDF-39 and EDF-153), achieving accuracies of 87.3% and 85.4%, respectively.
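To make the CLIP-style pairing of sleep signals with stage labels concrete, the following is a minimal sketch of the symmetric contrastive (InfoNCE) objective that CLIP popularized, applied to a batch of signal embeddings and their paired label embeddings. This is an illustrative reconstruction of the general CLIP objective, not the authors' exact implementation; the function names, embedding dimensions, and temperature value are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def clip_style_loss(signal_emb, label_emb, temperature=0.07):
    """Symmetric contrastive loss between sleep-signal embeddings and
    their paired stage-label embeddings (CLIP-style sketch).
    Matching signal/label pairs lie on the diagonal of the logit matrix."""
    # L2-normalize both embedding sets so similarity is cosine similarity
    s = signal_emb / np.linalg.norm(signal_emb, axis=1, keepdims=True)
    t = label_emb / np.linalg.norm(label_emb, axis=1, keepdims=True)
    logits = s @ t.T / temperature  # (batch, batch) similarity matrix
    n = logits.shape[0]
    idx = np.arange(n)
    # Cross-entropy in both directions; targets are the diagonal indices
    loss_sig = -np.log(softmax(logits, axis=1)[idx, idx]).mean()
    loss_lab = -np.log(softmax(logits, axis=0)[idx, idx]).mean()
    return (loss_sig + loss_lab) / 2

rng = np.random.default_rng(0)
signals = rng.normal(size=(8, 64))                  # 8 encoded signal epochs (hypothetical)
labels = signals + 0.1 * rng.normal(size=(8, 64))   # near-matching label embeddings
print(clip_style_loss(signals, labels))
```

During pre-training, minimizing this loss pulls each signal embedding toward the embedding of its own stage label and away from the other labels in the batch; aligned pairs therefore yield a lower loss than mismatched ones.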