A User Security Continuous Authentication Method for Earable Devices

Wang Yong, Xiong Yi, Yang Tianyu, Shen Yiran

Citation: Wang Yong, Xiong Yi, Yang Tianyu, Shen Yiran. A User Security Continuous Authentication Method for Earable Devices[J]. Journal of Computer Research and Development, 2024, 61(11): 2821-2834. DOI: 10.7544/issn1000-1239.202440415
Citation: Wang Yong, Xiong Yi, Yang Tianyu, Shen Yiran. A User Security Continuous Authentication Method for Earable Devices[J]. Journal of Computer Research and Development, 2024, 61(11): 2821-2834. CSTR: 32373.14.issn1000-1239.202440415


  • CLC number: TP391


Funds: This work was supported by the National Natural Science Foundation of China (61672179), the Youth Fund Project of Humanities and Social Sciences Research of the Ministry of Education of China (20YJCZH172), the China Postdoctoral Science Foundation (2019M651262), and the Basic Research Support Program for Outstanding Young Teachers in Heilongjiang Provincial Undergraduate Universities (YQJH2023302).
More Information
    Author Bio:

    Wang Yong: born in 1983. PhD, associate professor. Senior member of CCF. His main research interests include artificial intelligence, privacy computing, and Internet of things

    Xiong Yi: born in 2003. Bachelor. His main research interests include smart wearable devices, intelligent Internet of things, and machine learning

Yang Tianyu: born in 2002. Bachelor. His main research interests include smart wearable devices, intelligent Internet of things, and machine learning

Shen Yiran: born in 1987. PhD, professor. Member of CCF, senior member of IEEE. His main research interests include mobile computing and virtual reality

    Corresponding author: Yang Tianyu (yangtianyu@hrbeu.edu.cn)

  • Abstract:

    Earable devices are typical AIoT edge sensing devices with a wide range of application scenarios, so protecting the privacy of their legitimate users and preventing illegal use is critically important. Current user authentication methods for earable devices are limited by input interfaces, sensor costs, and device power consumption, resulting in insufficient security, low universality, and poor user experience. To address these problems, a user authentication method based on the built-in inertial measurement unit (IMU) of earable devices is proposed. The method extracts user-specific information from the vibration signals generated when users perform facial interaction gestures, and achieves diversified, implicit, continuous user authentication through intelligent analysis of this information. To extract accurate and reliable user-specific information, a deep neural network feature encoder based on a Siamese network is proposed: it maps gesture samples of the same user closer together in the feature space and enlarges the distance between gesture samples of different users, thus encoding user-specific information effectively. For continuous authentication based on this information, a weighted voting strategy based on the sample's distance to the one-class support vector machine (OCSVM) hyperplane is proposed. The strategy adaptively optimizes the discrimination boundary to better capture the underlying features and structure, determines the confidence of each sample from its distance to the hyperplane (whether it lies inside or outside), and uses these confidences as voting weights. Experimental results show that the proposed method achieves an authentication accuracy of 97.33% with a single vote and 99.993% after seven rounds of continuous voting, outperforming all compared methods. It dispenses with passwords while providing a smoother user experience and a higher level of security, and therefore has high practical value.
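
    To make the encoder idea concrete, here is a minimal PyTorch sketch of a Siamese feature encoder trained with a contrastive objective of the kind the abstract describes. The network shape, the 200-sample window, the 64-dimensional embedding, and all identifiers are illustrative assumptions, not details taken from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GestureEncoder(nn.Module):
    """Maps one 6-axis IMU window (accelerometer + gyroscope) to an embedding."""
    def __init__(self, in_channels: int = 6, embed_dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),        # collapse the time axis
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).squeeze(-1)    # (batch, 64)
        return F.normalize(self.fc(h), dim=-1)

def contrastive_loss(z1, z2, same_user, margin=1.0):
    """Pull same-user pairs together; push different-user pairs past a margin."""
    d = F.pairwise_distance(z1, z2)
    pos = same_user * d.pow(2)
    neg = (1.0 - same_user) * F.relu(margin - d).pow(2)
    return (pos + neg).mean()

# Siamese usage: the SAME encoder weights embed both windows of a pair.
encoder = GestureEncoder()
xa = torch.randn(8, 6, 200)                    # 8 windows, 6 axes, 200 samples
xb = torch.randn(8, 6, 200)
same_user = torch.randint(0, 2, (8,)).float()  # 1 = same user, 0 = different
loss = contrastive_loss(encoder(xa), encoder(xb), same_user)
loss.backward()
```

    Because both branches share one set of weights, minimizing this loss realizes exactly the mapping described above: same-user pairs are pulled together while different-user pairs are pushed at least a margin apart.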

  • Figure 1. Illustration of our system

    Figure 2. Seven facial interaction gestures

    Figure 3. Illustration of the Siamese network

    Figure 4. Illustration of OCSVM training

    Figure 5. Illustration of OCSVM authentication

    Figure 6. Illustration of hyperplane-based voting

    Figure 7. User classification accuracy with different numbers of training samples

    Figure 8. FPR, FNR, and EER with different numbers of training samples

    Figure 9. FPR, FNR, and EER for continuous authentication over different voting rounds
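
    The captions of Figs. 4-6 trace the authentication pipeline: fit a one-class SVM (OCSVM) on the legitimate user's gesture embeddings, score incoming gestures against the learned hyperplane, and vote. Below is a hedged sketch of such hyperplane-distance weighted voting using scikit-learn; the confidence mapping, the parameters, and the synthetic data are our illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
enroll = rng.normal(size=(200, 64))     # embeddings of the legitimate user
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(enroll)

def weighted_vote(embeddings: np.ndarray, threshold: float = 0.0) -> bool:
    """Accept iff the distance-weighted votes over several gestures sum positive."""
    scores = ocsvm.decision_function(embeddings)  # signed distance to the hyperplane
    votes = np.where(scores >= 0.0, 1.0, -1.0)    # inside -> accept, outside -> reject
    conf = 1.0 / (1.0 + np.exp(-np.abs(scores)))  # farther from boundary -> more confident
    return float(np.sum(conf * votes)) > threshold

# Seven voting rounds, one embedding per observed gesture.
session = rng.normal(size=(7, 64))
print("authenticated:", weighted_vote(session))
```

    A sample deep inside the boundary carries more weight than a borderline one, so a few confident accepts can outvote a marginal reject, which is the intuition behind the weighted voting described in the abstract.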

    Table 1. Methods Used in the Related References

    Method     Sensor              Biosignal                 Main Algorithm               Continuous Authentication   Device Cost
    Ref. [12]  Capacitive sensor   Gestures                  Logic circuits
    Ref. [13]  IMU                 Gestures                  DNN
    Ref. [14]  IMU                 Gestures                  Domain-adversarial training
    Ref. [15]  IMU                 Motion                    Machine learning
    Ref. [16]  IMU                 Motion                    RNN
    Ref. [17]  IMU                 Head vibration            CNN
    Ref. [18]  IMU                 Jaw vibration             CNN
    Ref. [19]  Microphone          Acoustic waves            Machine learning
    Ref. [20]  Microphone          Tooth acoustic print      DNN
    Ref. [21]  Microphone          Gesture vibration         Siamese network
    Ref. [22]  Microphone          In-ear/out-of-ear sound   Linear regression
    Ref. [23]  PPG                 Cardiac activity          SVM
    Ref. [24]  PPG                 Acoustic waves            Residual gated network
    Ours       IMU                 Gestures                  Machine learning

    Table 2. Basic Information of the 20 Volunteers

    Height/cm:   160~170: 6     170~180: 12       180~190: 2
    Weight/kg:   50~60: 4       60~70: 14         70~80: 2
    Age:         18~22: 15      22~25: 4          25~28: 1
    Occupation:  students: 19   faculty/staff: 1  off-campus: 0
    Gender:      12 / 8

    Table 3. Data Set Format

    Index      Accelerometer x   Accelerometer y   Accelerometer z   Gyroscope x   Gyroscope y   Gyroscope z   Label
    Gesture 1  0.964 0.972       −0.284 −0.265     0.053 0.057       −0.8 −4.9     0.1 0.7       −0.4 −0.1     0
    Gesture 2  0.968 0.980       −0.261 −0.257     0.022 0.068       −1.6 −2.4     −1 0.3        −0.5 −0.3     1
    Gesture n  0.976 0.984       −0.261 −0.265     0.034 0.04        −0.8 2.2      0.4 −1.1      −0.1 −0.1     19

    Table 4. Deployment Testing on Smartphones

    System Module       Energy Consumption/mJ   Memory Consumption/MB   Inference Time/ms
    Feature encoder     1.06                    42.1                    32
    User discriminator  0.24                    12.5                    12
    Overall             1.30                    54.6                    44
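
    Table 4 reports per-module energy, memory, and inference time on a smartphone. As a rough sketch of how the inference-time column could be obtained (the stand-in model, window shape, and warm-up/run counts are our assumptions, not the paper's setup; energy and memory would need platform-specific profilers):

```python
import time
import torch
import torch.nn as nn

# Hypothetical stand-in for the feature encoder; shapes are illustrative.
encoder = nn.Sequential(
    nn.Conv1d(6, 32, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 200, 64),
)

def mean_latency_ms(module: nn.Module, example: torch.Tensor,
                    warmup: int = 10, runs: int = 100) -> float:
    """Average wall-clock forward-pass time in milliseconds."""
    module.eval()
    with torch.no_grad():
        for _ in range(warmup):            # warm up caches and allocators
            module(example)
        t0 = time.perf_counter()
        for _ in range(runs):
            module(example)
        return (time.perf_counter() - t0) * 1000.0 / runs

x = torch.randn(1, 6, 200)                 # one 6-axis IMU gesture window
print(f"feature encoder: {mean_latency_ms(encoder, x):.2f} ms per inference")
```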
  • [1]

    The Gates Notes LLC. AI is about to completely change how you use computers [EB/OL]. [2024-05-27]. https://www.gatesnotes.com/AI-agents

    [2]

    Apple Inc. Apple AirPods [EB/OL]. [2024-05-27]. https://www.apple.com.cn/airpods/

    [3] Huawei Terminal Co., Ltd. Huawei Freebuds official website [EB/OL]. [2024-05-27]. https://consumer.huawei.com/cn/audio/ (in Chinese)

    [4] Sony (China) Co., Ltd. LinkBuds official website [EB/OL]. [2024-05-27]. https://www.sonystyle.com.cn/products/headphone/index.html (in Chinese)

    [5] Zhang Yuqing, Zhou Wei, Peng Anni. Survey of Internet of things security[J]. Journal of Computer Research and Development, 2017, 54(10): 2130−2143 (in Chinese) doi: 10.7544/issn1000-1239.2017.20170470

    [6] Zhou Jun, Shen Huajie, Lin Zhongyun, et al. Research advances on privacy preserving in edge computing[J]. Journal of Computer Research and Development, 2020, 57(10): 2027−2051 (in Chinese) doi: 10.7544/issn1000-1239.2020.20200614

    [7] Dong Xiaolei. Advances of privacy preservation in Internet of things[J]. Journal of Computer Research and Development, 2015, 52(10): 2341−2352 (in Chinese) doi: 10.7544/issn1000-1239.2015.20150764

    [8] Liu Qixu, Jin Ze, Chen Canhua, et al. Survey on Internet of things access control security[J]. Journal of Computer Research and Development, 2022, 59(10): 2190−2211 (in Chinese) doi: 10.7544/issn1000-1239.20220510

    [9]

    Bromley J, Guyon I, LeCun Y, et al. Signature verification using a "Siamese" time delay neural network [C] // Proc of the 6th Int Conf on Neural Information Processing Systems (NIPS’93). San Francisco, CA: Morgan Kaufmann Publishers Inc., 1993: 737–744

    [10] Wang Huiyong, Tang Shijie, Ding Yong, et al. Survey on biometrics template protection[J]. Journal of Computer Research and Development, 2020, 57(5): 1003−1021 (in Chinese) doi: 10.7544/issn1000-1239.2020.20190371

    [11]

    Tax D M, Duin R P. Support vector data description[J]. Machine Learning, 2004, 54: 45−66 doi: 10.1023/B:MACH.0000008084.60811.49

    [12]

    Lissermann R, Huber J, Hadjakos A, et al. EarPut: Augmenting ear-worn devices for ear-based interaction [C] // Proc of the 26th Australian Computer-Human Interaction Conf on Designing Futures: The Future of Design. New York: Association for Computing Machinery, 2014: 300−307

    [13]

    Xu Xuhai, Shi Haitian, Yi Xin, et al. EarBuddy: Enabling on-face interaction via wireless earbuds [C] // Proc of the 2020 CHI Conf on Human Factors in Computing Systems. New York: Association for Computing Machinery, 2020: 1−14

    [14]

    Wang Yong, Yang Tianyu, Wang Chunxiao, et al. BudsAuth: Toward gesture-wise continuous user authentication through earbuds vibration sensing[J]. IEEE Internet of Things Journal, 2024, 11(12): 22007−22020 doi: 10.1109/JIOT.2024.3380811

    [15]

    Qamar N, Siddiqui N, Ehatisham-ul-Haq M, et al. An approach towards position-independent human activity recognition model based on wearable accelerometer sensor[J]. Procedia Computer Science, 2020, 177: 196−203 doi: 10.1016/j.procs.2020.10.028

    [16]

    Lu Chris Xiaoxuan, Du Bowen, Zhao Peijun, et al. Deepauth: In-situ authentication for smartwatches via deeply learned behavioural biometrics [C] // Proc of the 2018 ACM Int Symp on Wearable Computers. New York: Association for Computing Machinery, 2018: 204−207

    [17]

    Li Feng, Zhao Jiayi, Yang Huan, et al. VibHead: An authentication scheme for smart headsets through vibration[J]. ACM Transactions on Sensor Networks, 2024, 20(4): 1−12

    [18]

    Liu Jianwei, Song Wenfan, Shen Leming, et al. Secure user verification and continuous authentication via earphone IMU[J]. IEEE Transactions on Mobile Computing, 2023, 22(11): 6755−6769

    [19]

    Ma D, Ferlini A, Mascolo C. OESense: Employing occlusion effect for in-ear human sensing [C] //Proc of the 19th Annual Int Conf on Mobile Systems, Applications, and Services. New York: Association for Computing Machinery, 2021: 175−187

    [20]

    Wang Zi, Ren Yili, Chen Yingying, et al. ToothSonic: Earable authentication via acoustic toothprint [C] // Proc of ACM Int Conf on Interactive Mobile, Wearable and Ubiquitous Technologies. New York: Association for Computing Machinery, 2022: 1−24

    [21]

    Wang Zi, Wang Yilin, Yang Jie. EarSlide: A secure ear wearables biometric authentication based on acoustic fingerprint [C] // Proc of ACM Int Conf on Interactive Mobile, Wearable and Ubiquitous Technologies. New York: Association for Computing Machinery, 2024: 1−29

    [22]

    Hu Changshuo, Ma Xiao, Ma Dong, et al. Lightweight and non-invasive user authentication on earables [C] // Proc of the 24th Int Workshop on Mobile Computing Systems and Applications. Newport Beach, CA: Association for Computing Machinery, 2023: 36−41

    [23]

    Li Jiao, Liu Yang, Li Zhenjiang, et al. EarPass: Continuous user authentication with in-ear PPG [C] // Proc of the 2023 ACM Int Joint Conf on Pervasive and Ubiquitous Computing & the 2023 ACM Int Symp on Wearable Computing. Cancun, Quintana Roo, Mexico: Association for Computing Machinery, 2023: 327−332

    [24]

    Choi S, Yim Junghwan, Jin Yincheng, et al. EarPPG: Securing your identity with your ears [C] // Proc of the 28th Int Conf on Intelligent User Interfaces. Sydney, NSW: Association for Computing Machinery, 2023: 835−849

    [25] Wang Qihong, Jia Hongjie, Huang Longxia, et al. Semantic contrastive clustering with federated data augmentation[J]. Journal of Computer Research and Development, 2024, 61(6): 1511−1524 (in Chinese) doi: 10.7544/issn1000-1239.202220995

    [26] Huang Xuejian, Ma Tinghuai, Wang Gensheng. Multimodal learning method based on intra- and inter-sample cooperative representation and adaptive fusion[J]. Journal of Computer Research and Development, 2024, 61(5): 1310−1324 (in Chinese) doi: 10.7544/issn1000-1239.202330722

    [27] Du Jinming, Sun Yuanyuan, Lin Hongfei, et al. Conversational emotion recognition incorporating knowledge graph and curriculum learning[J]. Journal of Computer Research and Development, 2024, 61(5): 1299−1309 (in Chinese) doi: 10.7544/issn1000-1239.202220951

    [28] Bao Han, Wang Yijie. A fast construction method of the erasure code with small cross-cloud data center repair traffic[J]. Journal of Computer Research and Development, 2023, 60(10): 2418−2439 (in Chinese) doi: 10.7544/issn1000-1239.202220580

    [29]

    Ganaie M A, Hu M, Malik A K, et al. Ensemble deep learning: A review[J]. Engineering Applications of Artificial Intelligence, 2022, 115: 105151

    [30]

    Zhao Langcheng, Lyu Rui, Lin Qi, et al. mmArrhythmia: Contactless arrhythmia detection via mmWave sensing [C] // Proc of the 24th ACM Int Conf on Mobile Computing and Networking. New York: Association for Computing Machinery, 2024: 1−25

    [31]

    Wang Yuexin, Zheng Jie, Wang Danni, et al. Multi-objective planning model based on the soft voting ensemble learning algorithm [C] // Proc of the 2023 IEEE Int Conf on Electrical, Automation and Computer Engineering (ICEACE). Piscataway: IEEE, 2023: 1104−1107

    [32] Guo Husheng, Zhang Yang, Wang Wenjian. Two-stage adaptive ensemble learning method for different types of concept drift[J]. Journal of Computer Research and Development, 2024, 61(7): 1799−1811 (in Chinese) doi: 10.7544/issn1000-1239.202330452

    [33]

    Xie Yadong, Li Fan, Wu Yue, et al. User authentication on earable devices via bone-conducted occlusion sounds[J]. IEEE Transactions on Dependable and Secure Computing, 2024, 21(4): 3704−3718 doi: 10.1109/TDSC.2023.3335368

    [34]

    Lee S, Choi W, Lee D H. The vibration knows who you are! A further analysis on usable authentication for smartwatch users[J]. Computers & Security, 2023, 125(C): 103040

    [35]

    Wang Zi, Yang Jie. Ear wearable (earable) user authentication via acoustic toothprint [J/OL]. 2022 [2024-05-27]. https://api.semanticscholar.org/CorpusID:248218666

    [36]

    Xu Xiangyu, Yu Jiadi, Chen Yingying, et al. TouchPass: Towards behavior-irrelevant on-touch user authentication on smartphones leveraging vibrations [C] //Proc of the 26th Annual Int Conf on Mobile Computing and Networking (MobiCom’20). New York: Association for Computing Machinery, 2020: 1−13

Publication History
  • Received: 2024-05-30
  • Revised: 2024-09-05
  • Available online: 2024-09-12
  • Issue published: 2024-10-31
