2013 Vol. 50 No. 10
2013, 50(10): 2029-2043.
Abstract:
As one of the most popular Internet applications, tagging systems easily become targets of malicious attackers. In a tagging system, malicious users annotate resources with tag spam for various reasons, misleading normal users who search for and share those resources. Existing work has shown that the effectiveness of defenses against tag spam directly influences the quality of service a tagging system provides. Against this background, and through a survey of existing research, this paper gives formal descriptions of attack styles and classifies existing countermeasures against tag spam into three types: detection-based, demotion-based, and prevention-based methods. For each type, the paper describes the corresponding algorithms in detail and analyzes their advantages and disadvantages, such as efficiency and availability. The paper concludes with future research directions for defending against tag spam. This survey of countermeasures can help readers and beginners become familiar with this new research field, and help researchers devise more novel, practical and effective countermeasures against tag spam.
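The survey does not include code; purely as an illustration of the detection-based family it describes, the following sketch scores a user by how often other users agree with their tag assignments (the data, names and scoring rule are hypothetical, not a method from the paper):

```python
from collections import defaultdict

def user_reliability(postings):
    """Score each user by how often other users agree with their tags.

    postings: iterable of (user, resource, tag) triples.
    Returns a dict user -> reliability in [0, 1]. A posting "agrees"
    when at least one other user assigned the same tag to the same
    resource; spammers' idiosyncratic tags earn low scores.
    """
    taggers = defaultdict(set)            # (resource, tag) -> users
    for user, res, tag in postings:
        taggers[(res, tag)].add(user)

    agreed, total = defaultdict(int), defaultdict(int)
    for user, res, tag in postings:
        total[user] += 1
        if len(taggers[(res, tag)]) > 1:  # someone else used this tag too
            agreed[user] += 1
    return {u: agreed[u] / total[u] for u in total}

postings = [
    ("alice", "r1", "python"), ("bob", "r1", "python"),
    ("eve", "r1", "buy-now"), ("eve", "r2", "buy-now"),
]
print(user_reliability(postings))  # eve scores 0.0; alice and bob score 1.0
```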
2013, 50(10): 2044-2058.
Abstract:
To deeply understand the procedures of various network applications, and to automatically classify, recognize, trace and control them, the protocol state machine that represents application sessions has to be obtained in advance. A novel approach is presented to reversely infer the protocol state machine from collected application-layer data. The protocol state machine is derived by error-correcting grammatical inference based on the state sequences that appear in application sessions. To fully exploit the power of error correction, a best-matching-path criterion is presented to resolve the difficulty of path selection during the error-correcting process, and a method for discriminating and pruning states with abnormal in-degree on the basis of statistical probability is proposed. Moreover, negative example sets with similar tokens are adopted to reinforce the error-correcting performance. To curb state explosion during reconstruction of the state machine, a simplification step, based on state merging with removal of identical tokens and model reduction over similar behavioral semantics, is used to obtain a compact protocol state machine that accurately expresses the internal operating mechanism of the protocol. Experiments conducted in a real network, containing a number of real applications with several application-layer protocols, validate this method.
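As a minimal illustration of the state-merging idea mentioned above (not the paper's error-correcting inference): build a prefix tree acceptor from observed session token sequences, then crudely merge states with identical outgoing token sets:

```python
from collections import defaultdict

def build_pta(sessions):
    """Prefix tree acceptor: state 0 is initial; transitions on tokens."""
    trans, next_id = {0: {}}, 1
    for seq in sessions:
        state = 0
        for tok in seq:
            if tok not in trans[state]:
                trans[state][tok] = next_id
                trans[next_id] = {}
                next_id += 1
            state = trans[state][tok]
    return trans

def merge_equivalent(trans):
    """Repeatedly merge states with identical outgoing token sets (a crude
    stand-in for behavioral-similarity reduction; the survivor's
    transitions are kept)."""
    while True:
        sig = defaultdict(list)
        for s, out in trans.items():
            sig[frozenset(out)].append(s)
        merges = {s: g[0] for g in sig.values() for s in g[1:]}
        if not merges:
            return trans
        trans = {s: {t: merges.get(d, d) for t, d in out.items()}
                 for s, out in trans.items() if s not in merges}

sessions = [["HELO", "MAIL", "RCPT", "DATA", "QUIT"],
            ["HELO", "MAIL", "RCPT", "RCPT", "DATA", "QUIT"]]
print(merge_equivalent(build_pta(sessions)))  # loops RCPT back into one state
```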
2013, 50(10): 2059-2069.
Abstract:
In 802.11 wireless networks, nodes belonging to different networks are rational and selfish, which results in low fairness, poor load balance and low social efficiency of variable-width channel allocation mechanisms. In this paper, we study the variable-width channel allocation problem in 802.11 wireless networks from a non-cooperative game-theoretic point of view. Firstly, we model variable-width channel allocation as a strategic game, prove the existence of a Nash equilibrium (NE) strategy, and show the conditions under which the allocation process converges to the NE state. Secondly, we propose a payment-based incentive mechanism to cope with the social inefficiency of the NE strategy. The incentive mechanism influences the nodes' allocation behavior and drives the system to the dominant strategy equilibrium (DSE) state, in which the whole system attains global optimality in terms of system-wide aggregate throughput. Meanwhile, we analyze the fairness and load balance of both strategies. Finally, we propose two variable-width channel allocation algorithms to reach the NE and DSE states, evaluate their efficiency, and discuss their influence on system-wide throughput. Simulation results show that the NE strategy achieves good fairness, and the DSE strategy outperforms the NE strategy in terms of load balance and social efficiency.
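A minimal sketch of convergence to a pure NE by iterated best response, in a toy congestion game (the paper's variable-width model and payment mechanism are not reproduced; the widths and payoffs here are hypothetical):

```python
def best_response_dynamics(n_nodes, n_channels, rounds=100):
    """Toy congestion game: each node picks one channel; its throughput is
    the channel width divided by the number of nodes sharing the channel.
    Iterated best response converges here because a congestion game
    admits a potential function."""
    width = [20.0] * n_channels              # equal-width channels (MHz)
    choice = [0] * n_nodes                   # everyone starts on channel 0
    for _ in range(rounds):
        changed = False
        for i in range(n_nodes):
            load = [choice.count(c) for c in range(n_channels)]
            load[choice[i]] -= 1             # exclude node i itself
            best = max(range(n_channels),
                       key=lambda c: width[c] / (load[c] + 1))
            if best != choice[i]:
                choice[i], changed = best, True
        if not changed:                      # no node wants to deviate: NE
            break
    return choice

print(best_response_dynamics(n_nodes=7, n_channels=3))  # loads settle at 3/2/2
```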
2013, 50(10): 2070-2081.
Abstract:
The remote binary attestation scheme of the trusted computing platform guarantees integrity, and hence the trustworthiness of a platform can be demonstrated to remote parties. However, as recently pointed out, binary attestation has shortcomings, particularly in applications. Its major problem is that it reveals information about the configuration of a platform (hardware and software) or its applications, which may lead to privacy issues such as service discrimination and anonymity violations. To solve the platform configuration information leakage caused by traditional binary attestation in trusted computing environments, we propose a new privacy-preserving property-based attestation (PBA) scheme, which has flexible mechanisms for checking the status of property certificates, low computational cost, and provable security in the random oracle model. Drawing on the ideas of verifier-local revocation and traceable signatures from group signatures, we present new certificate checking mechanisms, including an offline checking mechanism and an online checking mechanism. Moreover, we design the model of the scheme, define its security model, give its concrete construction in detail, and formally prove its security in the random oracle model: the scheme satisfies correctness, attestation unforgeability and configuration privacy. Finally, compared with other existing PBA schemes, the proposed PBA is more practical and efficient in both computational cost and attestation length.
2013, 50(10): 2082-2091.
Abstract:
Since integrity policy models were first proposed, their maturity has lagged behind that of confidentiality policy models, a restriction due to the difficulty of dividing integrity levels and to usability. In this paper, different kinds of integrity models are summarized from the viewpoint of practicality, and their characteristics are outlined. Based on this discussion, the paper presents a practical dynamic integrity protection model called DMIP. It simplifies the intricacy of dividing integrity levels and solves the existing practicality problems of current integrity models, especially for Linux. DMIP is designed to preserve system integrity against potential network-based attacks and local malicious files. Regarding usability on Linux, DMIP improves upon current integrity protection models. The paper also gives the invariants and constraints of the DMIP model and provides a formal proof of its properties.
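DMIP's exact rules are not spelled out in the abstract; the following is a generic low-water-mark-style sketch of dynamic integrity protection of the kind described, with hypothetical levels and operations:

```python
HIGH, LOW = 1, 0

class Subject:
    def __init__(self, name, level=HIGH):
        self.name, self.level = name, level

def read(subj, obj_level):
    """Dynamic demotion: reading low-integrity data (e.g. from the network
    or an untrusted file) lowers the subject's integrity level."""
    subj.level = min(subj.level, obj_level)

def can_write(subj, obj_level):
    """No write up: a low-integrity subject may not modify
    high-integrity objects such as system binaries."""
    return subj.level >= obj_level

p = Subject("browser")
read(p, LOW)               # downloads untrusted content, gets demoted
print(can_write(p, HIGH))  # False: cannot overwrite system files
print(can_write(p, LOW))   # True: may still write untrusted files
```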
2013, 50(10): 2092-2099.
Abstract:
Recently, Wang et al. proposed a traitor tracing scheme based on bilinear maps. They claimed that their scheme could achieve full collusion resistance, full revocation, full recoverability and black-box traceability, and that it is efficient in terms of transmission and storage overhead compared with previously proposed schemes. In this paper, we analyze their scheme and show that it does not achieve full revocation. We then modify their scheme and propose a new traitor tracing scheme based on bilinear maps. In this scheme, we employ a polynomial function and a filter function as the basic means of constructing the traitor tracing procedures, in order to minimize storage, computation and communication costs. More importantly, when traitors are found, the scheme can safely revoke their private keys without updating the private keys of other receivers, and it prevents the revoked users from recovering the decryption key. It therefore achieves full revocation, overcoming the weakness in Wang et al.'s scheme. The security of the proposed scheme rests on the hardness of the bilinear discrete logarithm problem and the decisional Diffie-Hellman problem. The security proof and performance analysis show that the proposed scheme is secure and achieves full collusion resistance, full recoverability, black-box traceability and full revocation. Moreover, it is better than Wang et al.'s scheme in terms of storage, computation and communication costs.
2013, 50(10): 2100-2108.
Abstract:
In a world facing the severe threat of DDoS, finding the best countermeasure raises the chance of survival. Defense effectiveness evaluation helps determine the best countermeasure, and is thus an important part of countermeasure selection. Existing defense effectiveness evaluation works by comparing the attack effect before and after the deployment of a defensive measure. Consequently, if the measure to be evaluated is already deployed, it has to be removed and then re-deployed during evaluation, making the cost of evaluation high. The cost can be reduced if the evaluation does not have to remove the defensive measure. In this paper, a defense effectiveness evaluation method that does not remove the defensive measure is proposed. Firstly, the defense effectiveness model (DEM) is presented. It chooses indices from the perspective of normal users, which reduces both the number of indices and the difficulty of measurement. Then, combined with an artificial neural network, the DEM can predict the attack effect that would be observed without the countermeasure, even when the countermeasure is already deployed. After that, the network simulator SSFNet is used to simulate a typical DDoS attack scenario. The simulation results validate the predictive ability of the artificial neural network in the DEM and confirm the correctness of the proposed method.
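A minimal sketch of the prediction step: a small feed-forward network trained by gradient descent to map DEM-style indices to an attack-effect value (the indices, synthetic data and network shape here are invented for illustration; the paper's DEM indices and training setup are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical DEM-style indices observed from normal users' perspective,
# e.g. [attack rate, packet loss, response delay]; the target attack
# effect below follows a made-up relationship, purely for illustration.
X = rng.uniform(0, 1, size=(200, 3))
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2]).reshape(-1, 1)

# One-hidden-layer network trained by plain batch gradient descent.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                 # forward pass
    pred = h @ W2 + b2
    err = pred - y                           # mean-squared-error gradient
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)         # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

test = np.array([[0.9, 0.5, 0.2]])
print((np.tanh(test @ W1 + b1) @ W2 + b2).item())  # predicted attack effect
```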
2013, 50(10): 2109-2116.
Abstract:
To address the intra-group and inter-group communication issues arising from functional heterogeneity in heterogeneous sensor networks (HSNs), the application of public-key cryptosystems, especially identity-based encryption (IBE), is studied, and a key management protocol for HSNs based on multi-domain identity-based encryption (M-IBE) is proposed. In the protocol, each group of an HSN logically corresponds to one domain in M-IBE. Before deployment, a trusted third party generates global public parameters for the HSN, selects public and private keys for each group, and extracts a private key for each sensor within the group. After deployment, neighboring sensors within the same group establish shared keys by exchanging sensor identities; neighboring sensors in different groups establish shared keys after being authorized by their cluster heads. The proposed protocol comprises four parts: key material pre-distribution, intra-group shared-key establishment, inter-group shared-key agreement, and the addition and removal of sensors. Security analysis and performance evaluation show that the protocol is highly secure, resisting capture attacks on both high-end and low-end sensors, while requiring little storage and providing constant connectivity probability. It can satisfy the demands of high-security application scenarios.
2013, 50(10): 2117-2125.
Abstract:
MIBS is a lightweight block cipher aimed at constrained resources such as RFID tags and sensor networks, proposed by Izadi et al. at CANS 2009. A few security analysis results on MIBS exist, such as differential and linear analysis of reduced-round MIBS. In this paper, we give an integral attack on reduced-round MIBS. Firstly, a 5-round integral distinguisher of MIBS is derived by considering the special property of the round function. Secondly, we use the higher-order integral technique to extend the 5-round distinguisher by another three rounds, which yields a better integral attack on MIBS. Finally, we attack 8-round, 9-round and 10-round MIBS using these distinguishers, applying the partial-sum technique to reduce the time complexity. We attack 8-round MIBS with data complexity 2^9.6 and time complexity 2^35.6 encryptions, 9-round MIBS with data complexity 2^37.6 and time complexity 2^40 encryptions, and 10-round MIBS with data complexity 2^61.6 and time complexity 2^40 encryptions. Moreover, the results apply to both MIBS-64 and MIBS-80. The higher-order integral technique can also be applied to other Feistel-SP block ciphers, improving the results of integral attacks on them.
2013, 50(10): 2126-2132.
Abstract:
Cognitive radio (CR) is a newly proposed wireless communication technology for improving spectrum utilization. Through channel sensing and learning, cognitive radio systems discover unused spectrum and occupy it, so that the network can allocate spectrum dynamically and improve the utilization of the available spectrum. A number of studies focus on single-channel cognitive radio systems; however, CR systems with parallel channels remain largely unexplored because of their complexity. In this paper, we evaluate the performance of a CR system with parallel channels using stochastic network calculus (SNC). The data arrival processes of all primary/secondary users and the service process of the parallel channels are characterized by Markov-modulated deterministic processes (MMDP), and are then formulated as arrival curves and service curves in SNC theory. Closed-form stochastic bounds on delay and backlog are obtained using the delay-bound and backlog-bound theorems of SNC. Finally, we run extensive simulations to verify the correctness of the proposed model, and analyze the performance of the CR system with parallel channels under various parameter settings. The results show that our framework can effectively model and analyze CR systems with parallel channels.
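For orientation, the deterministic network-calculus bounds that the stochastic theory refines have the following standard form, which SNC weakens into tail bounds (textbook background, not the paper's closed-form results):

```latex
B \le \sup_{t\ge 0}\{\alpha(t)-\beta(t)\}, \qquad
D \le \sup_{t\ge 0}\,\inf\{d\ge 0:\ \alpha(t)\le\beta(t+d)\},
\qquad
\Pr\{B(t)>x\}\le f(x), \quad \Pr\{D(t)>x\}\le g(x)
```

where $\alpha$ is the arrival curve, $\beta$ the service curve, and $f,g$ are violation-probability functions that, in this paper's setting, would be derived from the MMDP characterizations.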
2013, 50(10): 2133-2139.
Abstract:
At present, with the rapid development of computer and network communication technology, network security problems become more and more serious. An attacker can often infiltrate a seemingly well-guarded network system using multi-step attacks that exploit sequences of related vulnerabilities. Fortunately, attack graphs can reveal such potential threats by enumerating all possible sequences of atomic attacks. To address the difficulty of generating attack graphs for large network systems, a scalable approach to generating full attack graphs is proposed, based on an in-depth analysis of the modeling features of network environments and the limitations of previous algorithms. Firstly, a novel modeling language, AGML (Attack Graphs Modeling Language), is proposed to describe attack patterns and the initial scenario. Secondly, a scalable generation approach is put forward that indexes attributes and instantiates attack patterns. The algorithm has been tested on simulated networks, and the experimental results show that the approach can be applied to large-scale networks.
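A minimal fixpoint sketch of full attack graph generation in the spirit described (AGML itself and the paper's attribute index are not reproduced; the set lookup stands in for indexing, and the patterns and attributes below are hypothetical):

```python
def generate_attack_graph(initial, patterns):
    """Forward chaining: repeatedly instantiate any attack pattern whose
    preconditions all hold, adding its postcondition as a new attribute,
    until a fixpoint. Each edge (pre..., name, post) is one atomic attack."""
    attrs, edges = set(initial), []
    changed = True
    while changed:
        changed = False
        for name, pre, post in patterns:
            if post not in attrs and all(p in attrs for p in pre):
                attrs.add(post)
                edges.append((tuple(pre), name, post))
                changed = True
    return attrs, edges

initial = {"net(attacker,web)", "vuln(web,cve1)"}
patterns = [
    ("exploit_web", ["net(attacker,web)", "vuln(web,cve1)"], "root(web)"),
    ("pivot_db",    ["root(web)", "net(web,db)"],            "root(db)"),
]
attrs, edges = generate_attack_graph(initial, patterns)
for e in edges:
    print(e)   # only exploit_web fires: net(web,db) is absent from the scenario
```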
Abstract:
Many complex systems in the real world exist in the form of networks, such as social networks, biological networks and Web networks, collectively called complex networks. Research on complex networks has attracted researchers from different fields, such as physics, mathematics and computer science. One of the main problems in the study of complex networks is the detection of community structure, i.e. the division of a network into groups of nodes with dense intra-connections and sparse inter-connections, which has recently attracted great interest. The ability to detect community structure is useful in many respects, and community structure may provide insight into properties of a complex network system that are otherwise hard to discern. For instance, in the World Wide Web, community analysis has uncovered thematic clusters; in biochemical or neural networks, communities may be functional groups, and separating the network into such groups can simplify functional analysis considerably. This paper reviews the background, the significance and the state of the art in discovering (overlapping) communities, and concludes by raising several open issues. We aim to draw a comprehensive overview of this emerging scientific area, with the purpose of offering beneficial suggestions to related researchers.
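For reference, the most widely used quality function in this research area is the Newman-Girvan modularity (standard background rather than a contribution of this survey):

```latex
Q = \frac{1}{2m}\sum_{i,j}\left(A_{ij} - \frac{k_i k_j}{2m}\right)\delta(c_i, c_j)
```

where $A$ is the adjacency matrix, $k_i$ the degree of node $i$, $m$ the number of edges, and $\delta(c_i,c_j)=1$ exactly when nodes $i$ and $j$ are placed in the same community; dense intra-connections and sparse inter-connections correspond to large $Q$.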
2013, 50(10): 2155-2175.
Abstract:
In microblogging contexts like Twitter, for any given topic the number of content producers can easily reach tens of thousands, and a large number of users participate in the discussion. While this scale generates notable diversity, not all users are equally influential, and it makes finding the true influencers, those generally rated as interesting and authoritative on a given topic, challenging. In this paper, user influence is measured by random walks over multi-relational data in microblogs: repost, reply, copy, and read. Because the copy and read relations are uncertain, a new method is proposed to determine transition probabilities in such uncertain relational networks. Moreover, a combined random walk is proposed for the multi-relational influence network, considering transition probabilities both within and across the relation networks. Finally, influencers are classified into two types: multi-topical influencers and single-topical influencers. Experiments are conducted on a real dataset from Twitter containing about 0.26 million users and 2.7 million posts, and the results show that the proposed method is more effective than TwitterRank and other methods at discovering influencers. The results also show that multi-topical influencers are far fewer than single-topical influencers, but their influence is much stronger.
2013, 50(10): 2176-2184.
Abstract:
Reinforcement learning involves sequential decision making in model-free environments. The aim of the agent is to maximize the accumulated reward of acting in its environment over an extended period of time. Finding the optimal policy by direct RL can be very slow; to speed up convergence, one often-used solution is to integrate learning with planning. To further improve the convergence time and precision of Dyna-style algorithms, an optimized Dyna algorithm with prioritized sweeping, named Dyna-PS, is proposed, together with a theoretical proof of its convergence. The key idea of Dyna-PS is to integrate the prioritized sweeping method into the Dyna architecture, so that in the planning phase states are updated according to their priority functions, omitting updates of insignificant and unrelated states that traditional value iteration and policy iteration would perform. Experimental results show that Dyna-PS has better convergence performance and better robustness to state-space growth when applied to a maze scenario and a series of classical AI programming problems.
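A compact sketch of Dyna-style planning with prioritized sweeping (a generic textbook variant assuming a deterministic model; Dyna-PS's specific priority function and convergence conditions are in the paper):

```python
import heapq
import random
from collections import defaultdict

def dyna_ps(env_step, actions, episodes=30, n_planning=10,
            alpha=0.5, gamma=0.95, theta=1e-4, start=0, goal=5):
    """Dyna with prioritized sweeping: only state-action pairs whose
    Bellman error exceeds theta enter the planning queue."""
    Q = defaultdict(float)          # Q[(state, action)]
    model = {}                      # (s, a) -> (reward, next state)
    preds = defaultdict(set)        # s' -> {(s, a)} observed to reach s'
    pq = []                         # max-heap via negated priorities

    def priority(s, a):
        r, s2 = model[(s, a)]
        return abs(r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])

    for _ in range(episodes):
        s = start
        while s != goal:
            best = max(Q[(s, b)] for b in actions)      # epsilon-greedy
            a = random.choice([b for b in actions if Q[(s, b)] == best])
            if random.random() < 0.1:
                a = random.choice(actions)
            r, s2 = env_step(s, a)
            model[(s, a)] = (r, s2)
            preds[s2].add((s, a))
            if priority(s, a) > theta:
                heapq.heappush(pq, (-priority(s, a), (s, a)))
            for _ in range(n_planning):                 # planning phase
                if not pq:
                    break
                _, (ps, pa) = heapq.heappop(pq)
                pr, ps2 = model[(ps, pa)]
                Q[(ps, pa)] += alpha * (
                    pr + gamma * max(Q[(ps2, b)] for b in actions) - Q[(ps, pa)])
                for (qs, qa) in preds[ps]:   # re-queue affected predecessors
                    if priority(qs, qa) > theta:
                        heapq.heappush(pq, (-priority(qs, qa), (qs, qa)))
            s = s2
    return Q

def corridor(s, a):                 # toy chain 0..5; reward 1 at the goal
    s2 = max(0, min(5, s + (1 if a == "R" else -1)))
    return (1.0 if s2 == 5 else 0.0), s2

Q = dyna_ps(corridor, ["L", "R"])
print(max(["L", "R"], key=lambda a: Q[(3, a)]))   # learned action at state 3: R
```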
2013, 50(10): 2185-2194.
Abstract:
Aiming at the deficiencies of existing context-aware recommendation algorithms, this paper proposes a context-aware recommendation algorithm based on fuzzy C-means clustering. Firstly, the fuzzy C-means algorithm clusters historical contextual information, producing clusters and a membership matrix. The active user's contextual information is then matched against the clusters of historical contextual information, and qualifying non-member data are mapped into member data using the clustering membership degree as a mapping coefficient. Finally, the ratings of member users whose contexts conform to the active context are used to generate recommendations. Compared with existing algorithms, the proposed algorithm not only alleviates the inaccurate recommendations caused by changes in user context, but also overcomes the limitations of traditional hard clustering through fuzzy C-means. Moreover, the data sparseness introduced by clustering is relieved by the membership mapping function. Experiments conducted on the MovieLens dataset verify the effectiveness of the proposed algorithm.
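A minimal fuzzy C-means sketch showing the membership matrix the algorithm relies on (the context encoding and data are hypothetical; the paper's mapping function on top of the memberships is not reproduced):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy C-means: returns (centers, U) where U[k, i] is the
    membership of sample i in cluster k (columns sum to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)
    for _ in range(iters):
        W = U ** m                                   # fuzzified memberships
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)))               # standard FCM update
        U /= U.sum(axis=0)
    return centers, U

# Contexts encoded as numeric vectors, e.g. (time-of-day, weekday, location).
X = np.array([[0.1, 1, 0], [0.2, 1, 0], [0.8, 0, 1], [0.9, 0, 1.]])
centers, U = fuzzy_c_means(X, c=2)
print(U.round(2))   # soft memberships instead of hard cluster labels
```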
2013, 50(10): 2195-2205.
Abstract:
Network BBS has become a very important way for Web users to publish information. Discussion of hot topics on BBS affects people's views of and attitudes toward the physical world, and even policy making. This raises a series of questions: How can a network user's influence be computed? What is the trend of a user's influence? How can user influence be compared across topics? Since user activity follows a power law, most users lie in the "long tail"; how can the influential users be identified? Taking each thread as a unit and extracting the reply information among users, we build a user cascade relation graph, in which the number and length of replies constitute users' behavior features, while in-degree and out-degree constitute structural features. Based on the PageRank algorithm, and incorporating users' behavior features and the user relation network, this paper designs a multiple attributes rank (MAR) algorithm. The evolution of user influence is analyzed by dividing the time span, based on users' differing interest in a topic at different times. To investigate these questions, we conduct experiments on data from the Tianya BBS and evaluate many facets of influential-user identification and the MAR algorithm. We summarize some interesting findings and discuss future work.
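A minimal sketch of a PageRank-style computation over a behavior-weighted reply graph, in the spirit of MAR (the actual MAR attribute weighting is not reproduced; the toy weights below are hypothetical):

```python
import numpy as np

def multi_attribute_rank(W, d=0.85, iters=100):
    """PageRank-style ranking on a weighted reply graph. W[i, j] is the
    strength of j's replies to i (e.g. reply count times a length factor),
    so influence flows from repliers to the replied-to user."""
    n = len(W)
    col = W.sum(axis=0)
    P = np.where(col > 0, W / np.where(col > 0, col, 1), 1.0 / n)
    r = np.full(n, 1.0 / n)                  # P is column-stochastic
    for _ in range(iters):
        r = (1 - d) / n + d * P @ r          # damped power iteration
    return r

# Toy thread: users 0..3; entry W[i, j] aggregates replies j -> i,
# weighted by reply length (a stand-in for MAR's behavior features).
W = np.array([[0, 3, 1, 2],
              [0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 0, 0, 0.]])
print(multi_attribute_rank(W).round(3))   # user 0 gets the highest score
```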
2013, 50(10): 2206-2211.
Abstract:
Using a cart-curved-table model, a running pattern based on the ZMP for a humanoid robot is presented. The trajectory of the center of mass and the foot trajectory are planned in the support and flight phases, respectively. In the support phase, the center-of-mass trajectory is planned by solving the dynamic equation of the cart-curved-table model. In the flight phase, the center of mass follows a parabolic trajectory, with constant velocity in the horizontal direction and free fall in the vertical direction. The constraints on force and torque while the feet are in contact with the ground are analyzed, and the robot maintains dynamic equilibrium by adjusting the inclination of its upper body. At the same time, the torque values of all joints are solved from the dynamic equation of the humanoid robot. The simulation results show that the joint angles and joint torques change smoothly and the humanoid robot runs stably, which verifies that the proposed method is satisfactory.
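For reference, the plain cart-table model that the cart-curved-table model generalizes relates the ZMP to the center of mass in the support phase, while the flight-phase CoM is ballistic, as the abstract states (standard background; $z_c$ is the CoM height and $g$ gravity):

```latex
p_x = x - \frac{z_c}{g}\,\ddot{x} \quad \text{(support phase)}, \qquad
x(t) = x_0 + \dot{x}_0 t,\;\; z(t) = z_0 + \dot{z}_0 t - \tfrac{1}{2}gt^2 \quad \text{(flight phase)}
```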
2013, 50(10): 2212-2227.
Abstract:
Chip multi-processors (CMPs) are so prevalent that they have become the mainstream of high-performance processors. A CMP improves processor throughput by executing multiple threads simultaneously. Nevertheless, since some on-chip resources are shared by all threads, new problems arise that do not exist in previous single-processor systems. Contention for shared resources between threads executing on different cores can significantly degrade the quality of service of some threads and the overall performance of the system. As a result, fairly scheduling the shared resources (the shared cache and main memory being two of the most important in a CMP) becomes a serious problem. In this paper, we summarize recent studies of shared resource scheduling. Firstly, we introduce a series of cache partitioning algorithms that improve the throughput and fairness of the shared cache, including novel cache partitioning techniques that overcome the limitation of coarse-grained allocation. Then, we review investigations of memory access scheduling from different perspectives, considering row-buffer locality, bank-level parallelism, and so on. After that, we examine coordinated mechanisms that improve the performance of the entire system. Finally, potential research issues and future work are discussed.
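As background for the cache partitioning discussion, here is a greedy utility-based way-partitioning sketch in the spirit of utility-based cache partitioning (the miss curves are invented; real proposals add lookahead for non-convex curves and hardware utility monitors):

```python
def greedy_partition(miss_curves, total_ways):
    """Greedy utility-based cache partitioning: repeatedly give the next
    cache way to whichever thread's miss count drops the most.
    miss_curves[t][w] = misses of thread t when given w ways."""
    n = len(miss_curves)
    alloc = [0] * n
    for _ in range(total_ways):
        gain = [miss_curves[t][alloc[t]] - miss_curves[t][alloc[t] + 1]
                for t in range(n)]
        winner = max(range(n), key=lambda t: gain[t])
        alloc[winner] += 1
    return alloc

# Hypothetical miss curves (misses per kilo-instruction) for 0..8 ways:
curves = [
    [100, 60, 40, 30, 25, 22, 20, 19, 19],   # cache-friendly thread
    [80, 78, 77, 76, 75, 75, 75, 75, 75],    # streaming thread: little reuse
]
print(greedy_partition(curves, total_ways=8))  # most ways go to thread 0
```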
2013, 50(10): 2228-2238.
Abstract:
Modern computers rely on branch prediction to boost instruction-level parallelism. Theoretically, branch predictors with more complicated algorithms and larger data structures provide more accurate predictions. Unfortunately, overly large structures and excessively complicated algorithms cannot be implemented in practice because of their overlong response time. Many strategies have been proposed to trade delay against accuracy, but none has been adopted by industry. The ahead branch prediction architecture (ABPA) places a small table of branch-target pairs in the front end of the pipeline, which makes branch prediction fast enough even for aggressive processors, while the hybrid algorithms and data structures needed for accurate prediction are all moved to the back end of the pipeline. An effective mechanism is proposed that performs branch prediction ahead of time in the back end and updates the front-end branch-target table in time, and an indirect branch prediction algorithm based on branch history and target path (BHTP) is also implemented in ABPA. Experimental results with SPEC benchmarks on Gem5/SimpleScalar show that ABPA improves the branch prediction rate by 4.72% compared with a commonly used branch-target-buffer-based predictor. In addition, mispredictions for indirect branches under the hybrid algorithms with BHTP are reduced by an average of 32% compared with traditional algorithms.
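A toy model of the front-end/back-end split that ABPA-style designs use (entirely illustrative; the class, its trivial back-end rule, and the fall-through computation are hypothetical, not the paper's mechanism):

```python
class AheadPredictor:
    """Toy model of the ABPA split: the fetch stage only reads a small
    branch-target table (BTT); a richer back-end predictor runs off the
    critical path and refreshes the BTT entry ahead of the next fetch."""

    def __init__(self, size=16):
        self.size = size
        self.btt = {}                       # table index -> predicted target

    def front_end_lookup(self, pc):
        """All the front end does: one fast table read."""
        return self.btt.get(pc % self.size)

    def back_end_update(self, pc, taken, target):
        """Placeholder for an arbitrarily complex back-end algorithm; this
        sketch simply predicts the same outcome as last time (pc + 4 is
        the assumed fall-through for a 4-byte instruction)."""
        self.btt[pc % self.size] = target if taken else pc + 4

p = AheadPredictor()
p.back_end_update(pc=0x40, taken=True, target=0x80)
print(hex(p.front_end_lookup(0x40)))        # 0x80, from a single table read
```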
2013, 50(10): 2239-2246.
Abstract:
Variable-length ISAs and instruction compression can overcome a drawback of traditional very long instruction word (VLIW) architectures: they address the low code density of long instruction words by packing them densely in the instruction cache. However, instruction compression may split a long instruction word across two cache lines, so the word cannot be fetched and issued in one cycle, which becomes a performance bottleneck of VLIW processors. A novel high-performance variable-length instruction issue window mechanism is proposed in this paper. It solves the fetch and issue problem for split instruction words, provides a more effective and continuous instruction stream, and temporarily stores one iteration of a loop body to support software pipelining, which effectively improves the performance of VLIW DSP processors. Simulation experiments are carried out on a cycle-accurate processor simulator using the DSP/IMG library. Experimental results show that the proposed method improves average performance by about 21.89%. Under TSMC 65nm technology, the area and power of the proposed mechanism increase by only 0.98% and 0.76% of the core, respectively. The mechanism is suitable for VLIW architectures that adopt instruction compression and variable-length ISA technology.
2013, 50(10): 2247-2252.
Abstract:
Memory virtualization is one of the most important techniques for effectively abstracting, utilizing and isolating a computer's physical memory, and it largely determines the overall performance of system virtualization. However, traditional software-based methods suffer from inefficiency and complexity, while traditional hardware-assisted methods require re-design of the processor architecture. This paper presents a novel hardware-software co-designed method to accelerate memory virtualization on MIPS processors. It improves system performance without adding any extra hardware. The paper introduces MLASM (multiple layer address space model), which not only fills the memory virtualization gap in MIPS processors but also improves performance over existing memory virtualization methods. Meanwhile, the paper introduces a TLB (translation lookaside buffer) sharing method based on the multiple-layer address space, which reduces the overhead of virtual machine switching. Finally, a system virtual machine called VIRT-LOONGSON is implemented on the MIPS processor LOONGSON-3. Performance evaluation shows that the proposed method speeds up most benchmark programs by nearly 3 to 5 times compared with the binary translation method, and improves performance by 5% to 16% compared with the TLB simulation method.