Abstract:
Earable devices are typical AIoT edge sensing devices, and protecting the privacy of legitimate users while preventing unauthorized use has become critically important. Existing user authentication methods for earables are constrained by limited input interfaces, sensor cost, and device power consumption, leading to insufficient security, poor generality, and an inconvenient user experience. To address these limitations, a user authentication model based on the earable's built-in inertial measurement unit (IMU) is proposed. The model collects the vibration signals generated when users perform facial interaction gestures, extracts user-specific information from them, and performs implicit, continuous user authentication through intelligent analysis of this information. To extract accurate and reliable user-specific information, a deep neural network feature encoder based on a Siamese network is proposed; it maps gesture samples of the same user closer together in the feature space while pushing samples of different users farther apart, thereby encoding user-specific information effectively. For continuous authentication on top of these features, a weighted voting strategy based on the distance to the one-class support vector machine (OC-SVM) hyperplane is proposed: the decision boundary is adaptively optimized to capture the underlying feature structure, the confidence of each sample is determined by its distance inside or outside the hyperplane, and the weighted votes are aggregated for authentication. Experimental results show that the proposed method achieves an authentication accuracy of 97.33% with a single vote and 99.993% after seven rounds of continuous authentication, outperforming all compared methods. It provides a smoother, password-free user experience with a higher level of security and has strong practical value.
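As an illustration of the hyperplane-distance weighted voting described above, the following is a minimal sketch, not the authors' implementation: a one-class SVM is fit on the enrolled user's encoded gesture features, and each incoming sample votes with a weight derived from its signed distance to the hyperplane. The function names, the nu/gamma settings, the 128-dimensional feature size, and the acceptance threshold are all assumptions made for illustration.

```python
# Hedged sketch of OC-SVM distance-weighted voting (illustrative names and parameters).
import numpy as np
from sklearn.svm import OneClassSVM

def fit_user_model(user_features: np.ndarray) -> OneClassSVM:
    """Fit a one-class SVM on feature vectors of the enrolled (legitimate) user."""
    model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")  # assumed hyperparameters
    model.fit(user_features)
    return model

def weighted_vote(model: OneClassSVM, session_features: np.ndarray) -> bool:
    """Authenticate a session of consecutive gesture samples.

    decision_function() gives the signed distance to the hyperplane:
    positive inside (likely legitimate), negative outside (likely impostor).
    The magnitude of the distance is used as the confidence weight of each vote.
    """
    distances = model.decision_function(session_features)  # shape: (n_samples,)
    votes = np.sign(distances)        # +1 accept, -1 reject per sample
    weights = np.abs(distances)       # farther from the boundary => more confident
    score = np.sum(votes * weights) / (np.sum(weights) + 1e-12)
    return score > 0.0                # accept if the weighted vote is positive

# Example usage with random stand-in features (a 128-dim encoder output is assumed):
rng = np.random.default_rng(0)
enrolled = rng.normal(0.0, 1.0, size=(200, 128))
model = fit_user_model(enrolled)
session = rng.normal(0.0, 1.0, size=(7, 128))  # seven consecutive gesture samples
print("accepted:", weighted_vote(model, session))
```

In this sketch, aggregating several consecutive gestures corresponds to the multi-round continuous authentication reported in the abstract, with each round contributing a confidence-weighted vote rather than a hard accept/reject decision.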