ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 August 2011, Volume 48 Issue 8
Survey of Botnets
Fang Binxing, Cui Xiang, and Wang Wei
2011, 48(8):  1315-1331. 
In recent years, botnets, evolving from traditional worms and Trojans, have become one of the most effective platforms for many Internet attacks, and even a powerful weapon for cyberwarfare. Therefore, as defenders, we should pay close attention to botnets: both current research findings and their evolution trends. In this paper, we divide the evolution of botnets into five phases and analyze the characteristics and representative botnets of each phase. To describe botnets unambiguously, we define botnets formally and classify them into four classes based on topology structure. To give an overall perspective of current research, we divide it into five fields (detection, tracking, measurement, prediction, and countermeasures) and analyze each field in detail. Based on a comprehensive study of the development of botnet attack and defense, we extract several inescapable weaknesses inside botnets, which could be exploited to defend against them. To conclude, we suggest possible countermeasures against botnets and predict their possible evolution trends.
Research on Trusted Computing Technology
Feng Dengguo, Qin Yu, Wang Dan, and Chu Xiaobo
2011, 48(8):  1332-1349. 
Trusted computing, as a novel information security technology, has become an important research area of information security. The TCG, comprising international IT giants, has published a series of trusted computing specifications to promote the comprehensive development of trusted computing technology and industry, and the core specifications have been adopted as international standards by ISO/IEC. In academia, research institutions at home and abroad have studied trusted computing in depth and obtained rich achievements. In China, an independent trusted computing standard infrastructure has been founded around the TCM core on the basis of independent cryptographic algorithms, forming a complete trusted computing industry chain and breaking the monopoly of the international IT giants on trusted computing technology and industry. With the rapid development of the field, many key technical problems remain to be solved, and related research has been carried out in succession recently. This paper comprehensively presents our research results on trusted computing technology. Beginning with establishing trust in terminal platforms, we propose a trustworthiness-based trust model and give a method of building the trust chain dynamically with information flow, which to some extent ensures real-time and secure trust establishment. Aiming at the security and efficiency problems of remote attestation protocols, we propose the first property-based attestation scheme on bilinear maps and the first direct anonymous attestation scheme based on the q-SDH assumption from bilinear maps. In trusted computing testing and evaluation, we propose a method of generating test cases automatically with EFSM, and from this method develop a trusted computing platform testing and evaluation system, the first to be applied practically in China.
An Efficient ID-Based Verifiably Encrypted Signature Scheme
Zhou Yousheng, Sun Yanbin, Qing Sihan, and Yang Yixian
2011, 48(8):  1350-1356. 
Verifiably encrypted signatures are useful in handling fair exchange problems, especially online contract signing. A new ID-based verifiably encrypted signature scheme is proposed based on the Shim signature scheme. The new scheme uses no zero-knowledge proofs to provide verifiability, thus eliminating the computational burden of complicated interaction. The verifiably encrypted signature is created by adding a value to one parameter of the Shim signature, and verified by multiplying one pairing value with the right-hand side of the Shim verification equation. For these reasons, the design of the proposed scheme is compact. The new scheme is provably secure in the random oracle model under the CDH assumption. The analysis shows that the presented scheme has smaller communication requirements and better computational complexity than previous ID-based verifiably encrypted signature schemes. ID-based public key cryptography has become a good alternative to the certificate-based public key setting, especially when efficient key management and moderate security are required. Our new verifiably encrypted signature scheme is entirely ID-based, providing an efficient primitive for building fair exchange protocols in ID-based public key cryptosystems.
A New Construction of Short Hierarchical Identity-Based Signature in the Standard Model
Wu Qing, Zhang Leyou, and Hu Yupu
2011, 48(8):  1357-1362. 
Hierarchical identity-based signature (HIBS) has wide application in large-scale networks. However, the existing work cannot resolve the trade-off between security and efficiency. The main challenge at present is to construct a highly efficient and strongly secure HIBS with low computation cost. To overcome the drawbacks of previous work, a new hierarchical identity-based signature scheme is introduced. The proposed scheme has several advantages over the available ones. For example, the private key size shrinks as the identity depth increases, the signature consists of only three group elements, and only three bilinear pairings are needed in the verification algorithm, independent of the hierarchy depth. Furthermore, the security of the new scheme is based on the general selective-identity (Gs-ID) security model, a generalization of the full security model and the selective-identity model. Under the h-computational Diffie-Hellman exponent (h-CDH) assumption, our scheme is proven secure against Gs-ID and adaptive chosen message attacks. In addition, the security analysis does not rely on random oracles. The assumption in our scheme is more natural than many of the hardness assumptions recently introduced for HIBS in the standard model, and the scheme resolves the trade-off between security and computational efficiency.
Multiple-Authority-Key Functional Encryption: An M-KP-ABE Scheme Based on LMSSS
Yang Xiaoyuan, Cai Weiyi, and Chen Haibin
2011, 48(8):  1363-1369. 
Functional encryption opens up a much larger world of possibilities for sharing encrypted data and suffices for many emerging applications. Some recent work aimed at constructing different types of fine-grained encryption systems that can be cast in the framework of functional encryption, such as IBE, ABE and PE, but it focused only on systems supporting single-authority-key functionality. We extend functional encryption to multiple-authority-key functional encryption, which can provide more sophisticated and flexible functionality. This system allows an encryptor to specify a policy and a capability describing what users can learn from the ciphertext. The policies are similar to those defined in previous systems, and the capabilities are expressed as different kinds of authority keys. This paper gives a security model for a class of multiple-authority-key functional encryption, multiple-authority-key KP-ABE. A new KP-ABE scheme, which supports functionalities taken over multiple authority keys, is proposed in the given security model. Our techniques allow any attribute access structure expressed by a linear multi-secret sharing scheme (LMSSS) matrix M. Based on the DBDH assumption, this scheme is proven selectively secure in the standard model under chosen plaintext attack. It is easy to derive a single-authority-key scheme from the multiple-authority-key scheme and to construct fine-grained tree-access structures. The computational cost of our scheme equals that of the single-authority-key scheme, which makes it appropriate for many practical applications.
Analysis for Probabilistic and Timed Information Flow Security Properties via ptSPA
Li Chao, Yin Lihua, and Guo Yunchuan
2011, 48(8):  1370-1380. 
Information flow analysis is one of the primary approaches for analyzing computer security problems such as covert channels and data leaks. Security process algebra (SPA) is a classic method for studying information security properties in nondeterministic systems. To analyze information security properties and capture information leakage in probabilistic and real-time systems, SPA is extended to a probabilistic and timed setting. Probabilistic and timed security process algebra (ptSPA), an extension of SPA, is proposed by first introducing a probabilistic and timed calculus to specify activities constrained by probability and time, and extending the formal syntax and semantics of SPA. Then, the related notion of probabilistic and timed bisimulation equivalence is presented, describing when two observable behavior traces are equivalent once probability and time are considered. Based on ptSPA, several information flow security properties for probabilistic and real-time systems, such as TBSPNI, PTBNDC and SPTBNDC, are defined, which can capture probabilistic and timing covert channels that the original SPA cannot. The relations among these information flow security properties are analyzed. Finally, a case study shows that the calculus of ptSPA is expressive enough to model and analyze probabilistic and timed systems.
Algorithmic Complexity Attacks Against WuManber
Zhang Yu, Liu Ping, Liu Yanbing, Tan Jianlong, and Guo Li
2011, 48(8):  1381-1389. 
Pattern matching is one of the fundamental problems in computer science. It is the key problem in many important areas such as network information security, information retrieval and filtering, and computational biology. A great number of security problems have arisen with the wide application of pattern matching, especially in network information security systems such as intrusion detection and prevention systems, anti-virus systems, anti-spam systems and firewalls. A method of algorithmic complexity attacks against WuManber, a classical multi-pattern matching algorithm, and the optimal solution for creating the attacking data are presented in this article. Compared with random data and data from the network, the attacking data can greatly reduce the searching speed of WuManber. Experiments on random data sets show that the larger the character set, the better the attack performance; experiments on data from the network show that the attacking data can reduce WuManber's searching speed by more than 50%. We find that the attack data can be created even if only a small part of the pattern set is known. From a defensive standpoint, it is therefore important that the pattern set be kept secret. This article also provides some strategies for improving the security of network information security systems. The attacking data can also be used to check system security.
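The shift-table mechanism such an attack exploits can be illustrated with a toy sketch. This is a simplified Wu-Manber, not the paper's exact construction; the patterns, block size, and crafting heuristic are illustrative. Blocks that end a length-m pattern prefix get shift 0, so text built from those blocks defeats the skip loop and forces the scanner to crawl:

```python
def build_shift(patterns, B=2):
    """SHIFT table: each B-gram maps to how far the window may jump."""
    m = min(len(p) for p in patterns)
    shift = {}
    default = m - B + 1
    for p in patterns:
        for i in range(m - B + 1):
            block = p[i:i + B]
            # distance from this block to the end of the m-length prefix
            shift[block] = min(shift.get(block, default), m - B - i)
    return shift, default, m

patterns = ["attack", "borrow", "credit"]
shift, default, m = build_shift(patterns)

# Adversarial text: repeat blocks whose SHIFT value is 0, so the scan
# window advances one character at a time and verifies often.
zero_blocks = [b for b, s in shift.items() if s == 0]
attack_text = "".join(zero_blocks) * 50

def scan_shifts(text, shift, default, m, B=2):
    """Number of scan steps taken over the text (more steps = slower)."""
    pos, steps = m - 1, 0
    while pos < len(text):
        s = shift.get(text[pos - B + 1:pos + 1], default)
        pos += s if s > 0 else 1  # on shift 0 a real matcher verifies, then moves 1
        steps += 1
    return steps

benign_text = "x" * len(attack_text)
print(scan_shifts(attack_text, shift, default, m) >
      scan_shifts(benign_text, shift, default, m))
# → True
```

Crafting such text requires knowing only enough of the pattern set to harvest zero-shift blocks, which matches the abstract's observation that partial knowledge of the patterns suffices.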
A Modular Approach Towards Design and Analysis of Authenticated Key Exchange Protocol Based on Extended Canetti-Krawczyk Model
Pan Jiaxin and Wang Libin
2011, 48(8):  1390-1399. 
We propose a modular extended Canetti-Krawczyk (eCK) model, named meCK, in order to avoid the controversial random oracle assumption in the security proofs of authenticated key exchange (AKE) protocols. Our model treats an AKE protocol as a secret exchange module and a key derivation module, and formalizes the adversarial capabilities and security properties. By composing the security of these two modules, we obtain the modular model and prove that it is stronger than the original eCK model. With the help of the modular approach, an efficient AKE protocol named UPS is designed. UPS is provably meCK-secure under the existence of a pseudo-random function family and a target collision-resistant hash function family, and the hardness of the Gap Diffie-Hellman problem. Compared with related work in the standard model, UPS requires weaker and more standard cryptographic assumptions, and reduces group exponentiations by 50% to 67%. Finally, the design and security proof of UPS validate the effectiveness of our model and solve an open problem posed at ProvSec 2009.
Ownership Transfer Protocol for RFID Tag
Jin Yongming, Sun Huiping, Guan Zhi, and Chen Zhong
2011, 48(8):  1400-1405. 
Radio frequency identification (RFID) is becoming ubiquitous and is well suited to ubiquitous computing environments. Its flexibility holds great promise for novel applications, and RFID tags are increasingly being deployed. Security and privacy are basic requirements in most RFID applications. Conventional security primitives cannot be integrated into RFID tags, as tags have inadequate computation capabilities and extremely limited resources. Therefore, it is important to have lightweight security mechanisms suitable for RFID tags. A tagged item is often passed to a new owner, and the privacy of the new (as well as the past) owner should be guaranteed. Based on SQUASH, a message authentication code (MAC) with provable security properties for highly constrained devices, a new lightweight ownership transfer protocol is proposed. The new protocol performs more efficiently than other hash-based schemes. Moreover, it achieves a very strong notion of security, and it is the first protocol based on SQUASH whose security properties have been proven. Finally, the new protocol can protect the forward and backward privacy of both parties, a very important requirement in ownership transfer applications.
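The SQUASH primitive the protocol builds on can be sketched as follows. This is a toy illustration of Shamir's squaring idea, not the paper's protocol: the SHA-256 mixing step, the challenge format, and the window size are stand-ins for the real nonlinear mixing. The tag returns a window of middle bits of m² mod n, where m mixes the shared secret with the reader's challenge:

```python
import hashlib

N = 2**1277 - 1  # Shamir's suggested composite modulus for SQUASH

def squash(secret: bytes, challenge: bytes, window=128) -> int:
    # Hypothetical mixing step: derive m from the secret and challenge.
    m = int.from_bytes(hashlib.sha256(secret + challenge).digest() * 5, "big") % N
    sq = pow(m, 2, N)                       # Rabin-style squaring
    mid = (N.bit_length() - window) // 2    # extract the middle bits
    return (sq >> mid) & ((1 << window) - 1)

# Reader and tag derive the same response from the shared secret.
tag = squash(b"shared-secret", b"nonce-42")
reader = squash(b"shared-secret", b"nonce-42")
print(tag == reader)
# → True
```

The appeal for constrained tags is that the squaring can be computed bit-serially without storing the full product, which is what makes a MAC of this shape lighter than a conventional hash.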
A Secure Routing Protocol Based on D-S Evidence Theory in Ad Hoc Networks
Li Xiaoqing, Li Hui, Yang Kai, and Ma Jianfeng
2011, 48(8):  1406-1413. 
We present an anonymous and efficient routing protocol, AE-ARAN, for MANETs based on Dempster-Shafer (D-S) evidence theory. A trust evaluation model is given that can discover malicious actions in the network dynamically and in a timely manner, isolate them efficiently, resist attacks to some extent, and effectively improve the reliability, robustness and security of the overall network, thus providing a secure environment for route establishment. Meanwhile, ARAN is modified: we derive an anonymous identity for each node by hashing, stored in the routing table, which effectively ensures the anonymity of the network and avoids repeated routing requests. Based on the Diffie-Hellman key agreement algorithm, the protocol performs key agreement during routing. Theoretical analysis and simulation results show that the proposed scheme can detect and isolate malicious nodes efficiently and resist attacks. The protocol achieves anonymity and improves the efficiency and security of the whole network.
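The evidence-combination step underlying such a trust model can be sketched with Dempster's rule. The frame of discernment and the mass values below are illustrative, not taken from the paper:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two mass functions over frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb           # mass assigned to contradictory evidence
    k = 1.0 - conflict                    # normalization factor
    return {s: v / k for s, v in combined.items()}

T, U = frozenset({"trust"}), frozenset({"untrust"})
TU = T | U                                # ignorance: could be either
m1 = {T: 0.7, U: 0.1, TU: 0.2}            # evidence from direct observation
m2 = {T: 0.6, U: 0.2, TU: 0.2}            # evidence from recommendations
m = combine(m1, m2)
print(round(m[T], 3))
# → 0.85
```

Combining the two sources raises the belief in "trust" above either source alone, which is the mechanism a D-S trust model uses to fuse direct and recommended evidence about a node.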
TRBAC: Trust Based Access Control Model
Liu Wu, Duan Haixin, Zhang Hong, Ren Ping, and Wu Jianping
2011, 48(8):  1414-1420. 
Access control is a process that controls whether users can execute certain operations or access certain network resources according to the users' identities or attributes. Discretionary access control and mandatory access control are the two main access control modes broadly used in secure operating systems. Discretionary access control is based on user identity and/or groups, and mandatory access control is usually based on sensitivity labels. Neither mode can completely satisfy all access control requirements: discretionary access control is too loose to restrict the propagation of privileges, while mandatory access control is too rigid to use flexibly. This paper analyzes current access control models and extends the RBAC (role-based access control) model to address its deficiencies, proposing a trust-based access control model (TRBAC). The TRBAC model provides a more secure, flexible and fine-grained dynamic access control mechanism, and therefore improves both the security and the reliability of the authorization mechanism.
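The core idea, gating role activation on a trust value as well as role membership, can be sketched as follows. The class, thresholds, and trust values are hypothetical, not the paper's model:

```python
from dataclasses import dataclass, field

@dataclass
class TrustRBAC:
    role_members: dict = field(default_factory=dict)    # role -> set of users
    role_threshold: dict = field(default_factory=dict)  # role -> minimum trust
    trust: dict = field(default_factory=dict)           # user -> trust in [0, 1]

    def can_activate(self, user, role):
        # Plain RBAC check (membership) plus the trust-based condition.
        return (user in self.role_members.get(role, set())
                and self.trust.get(user, 0.0) >= self.role_threshold.get(role, 1.0))

ac = TrustRBAC(
    role_members={"admin": {"alice", "bob"}},
    role_threshold={"admin": 0.8},
    trust={"alice": 0.9, "bob": 0.5},
)
print(ac.can_activate("alice", "admin"), ac.can_activate("bob", "admin"))
# → True False
```

Because trust values change at runtime, the same role assignment yields different authorization decisions over time, which is the dynamic behavior plain RBAC lacks.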
Formal Security Analysis on Trusted Platform Module Based on Applied π Calculus
Xu Shiwei and Zhang Huanguo
2011, 48(8):  1421-1429. 
Trusted computing is a popular paradigm for enabling computers to achieve a greater level of security than is possible in software alone. The core hardware component, the trusted platform module (TPM), is designed to achieve this goal. As there are many relevant products on the market and some attacks have already been found, security analysis of the TPM is necessary. However, because the TPM is a complex security component whose specification is too long to be completely covered, existing work has not formally or fully analyzed its various security properties. An applied π calculus model of the interactive processes between the TPM and its user is proposed as a solid foundation for discussing these properties. Based on the formal model, the secrecy, authentication and weak secrecy problems of the TPM are identified and formalized; some known attacks are rediscovered and some new attacks are found formally rather than through descriptions in natural language. Furthermore, fixes for the corresponding security vulnerabilities are proposed and verified. To make the analysis and verification more accurate and efficient, an automatic theorem proving tool is also used.
Static Analysis of TOCTTOU Vulnerabilities in Unix-Style File System
Han Wei and He Yeping
2011, 48(8):  1430-1437. 
TOCTTOU is a serious threat to Unix-style file systems. All existing static detection methods have high false positive rates, for two reasons: firstly, the function pairs that may cause TOCTTOU vulnerabilities are not defined and enumerated accurately; secondly, the methods over-approximate the program and omit a lot of useful information. In this paper, we first systematically examine the TOCTTOU pairs in the standard C library. On this basis, a static analysis method is presented to detect TOCTTOU vulnerabilities. A vulnerability is expressed as a finite safety state property, and at each program point a value is associated with a set of states. To make the analysis more precise, the algorithm is inter-procedurally flow-sensitive and intra-procedurally path-sensitive. To achieve scalability, the safety state property of each procedure is analyzed independently and the inter-procedural analysis is summary-based. The experimental results show that this method can effectively find TOCTTOU vulnerabilities in C programs and, in comparison with other static methods, effectively reduces the false positive rate.
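The check/use pairing at the heart of the analysis can be illustrated with a toy trace-based version. The pair lists below are a small illustrative subset of the libc calls involved, and a real analyzer works statically over program paths rather than a linear trace:

```python
# "Check" calls test a path; "use" calls act on it. A check followed by
# a use of the same path leaves a window for an attacker to swap the file.
CHECK_CALLS = {"access", "stat", "lstat"}
USE_CALLS = {"open", "chmod", "chown", "unlink", "rename"}

def find_tocttou_pairs(trace):
    """trace: list of (call_name, path) tuples in program order."""
    pending = {}   # path -> index of the most recent check
    findings = []
    for i, (call, path) in enumerate(trace):
        if call in CHECK_CALLS:
            pending[path] = i
        elif call in USE_CALLS and path in pending:
            findings.append((pending.pop(path), i, path))
    return findings

trace = [("access", "/tmp/f"), ("open", "/tmp/f"),   # classic access/open race
         ("stat", "/etc/x"), ("open", "/var/y")]     # no matching use on /etc/x
print(find_tocttou_pairs(trace))
# → [(0, 1, '/tmp/f')]
```

The standard remedy is to drop the check entirely and operate on a file descriptor (open then fstat), so the checked object and the used object are guaranteed to be the same.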
Software Integrity Verification Based on VMM-Level System Call Analysis Technique
Li Bo, Li Jianxin, Hu Chunming, Wo Tianyu, and Huai Jinpeng
2011, 48(8):  1438-1446. 
In a virtualized cloud computing platform, the key security problem is to guarantee the trustworthiness of the software running in the platform. Integrity measurement and verification has been proposed and studied by many researchers as an effective way to verify the integrity of computer systems. However, existing approaches have limitations in compatibility, security and maintainability, and cannot be applied to the cloud computing platform. In this paper, we propose an approach named VMGuard, which leverages the VMM to take integrity measurements from outside the operating system. We adopt VMM-based system call interception to detect the execution of binaries, and propose system call correlation and guest OS file system metadata reconstruction to verify the integrity of software in the guest OS. We have developed a prototype of VMGuard and implemented it in two mainstream virtual machine monitors, Qemu and KVM. We also evaluate the effectiveness and performance overhead of our approach through comprehensive experiments. The results show that VMGuard achieves effective integrity measurement with less than 10% overhead.
A Design Approach for Trustworthy Software and Its Trustworthiness Evaluation
Tian Junfeng, Li Zhen, and Liu Yuling
2011, 48(8):  1447-1454. 
With the continuously deepening application of software in sensitive fields such as finance, military affairs and economics, the requirement of software trustworthiness becomes more urgent. The trust chain of the Trusted Computing Group (TCG) mainly ensures the static trustworthiness of computers and cannot ensure the dynamic trustworthiness of running software. To address this problem, we extend the TCG trust chain by introducing a trustworthy engine between the operating system and application software, and present a trust chain model of trustworthy software driven by the trustworthy engine. We also present an approach to trustworthy software design and its trustworthiness evaluation policies based on this trust chain model. Software trustworthiness is merged into software design by introducing a trustworthy view, which describes the trustworthy behavior trace of the software, and by inserting a checkpoint sensor at each checkpoint of the trustworthy software. Trustworthiness is enforced by measuring software integrity and monitoring the behavior trace of the running software. Experiments and analysis show that software designed with our approach can detect anomalies of running software effectively, and detects such anomalies more successfully than software based on the TCG trust chain.
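The checkpoint measurement idea can be sketched minimally: at each checkpoint the running code is hashed and compared against the expected value recorded at design time. Names and data here are purely illustrative:

```python
import hashlib

def measure(code: bytes) -> str:
    """Integrity measurement: a cryptographic digest of a code segment."""
    return hashlib.sha256(code).hexdigest()

# Recorded at design time in the trustworthy view (a stand-in byte string
# plays the role of the code segment here).
expected = measure(b"trusted-code-segment")

def checkpoint_ok(running_code: bytes) -> bool:
    """Checkpoint sensor: does the running code still match the expectation?"""
    return measure(running_code) == expected

print(checkpoint_ok(b"trusted-code-segment"), checkpoint_ok(b"tampered!"))
# → True False
```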
Detecting the Vulnerability Pattern of Writing Tainted Value to Tainted Address
Hu Chaojian, Li Zhoujun, Guo Tao, and Shi Zhiwei
2011, 48(8):  1455-1463. 
Device drivers are low-level computer programs that allow higher-level programs to interact with hardware devices. Commonly, vulnerabilities in device drivers are more devastating than those in applications. "Writing tainted value to tainted address" is a vulnerability pattern that frequently occurs in Windows device driver programs. In this paper, we describe this vulnerability pattern explicitly for the first time, present a systematic method to detect it in binary Windows device driver programs automatically, and implement our method in a prototype tool called T2T-B2C. The method is based on decompilation and static taint analysis and, unlike other methods, can analyze native binary code as well as C code. Accordingly, T2T-B2C consists of two components, T2T and B2C. First, B2C translates binary files to C files by decompilation; then T2T uses static taint analysis to detect statements that write a tainted value to a tainted address in the C code produced by B2C. We evaluate T2T-B2C on binary device drivers of several Windows anti-virus programs and find 6 previously unknown vulnerabilities. The results show that T2T-B2C is a practical vulnerability detection tool that scales to large programs.
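The flagged pattern can be illustrated with a toy taint pass over a made-up three-address-style IR. The statement forms and taint sources are illustrative, not T2T's actual analysis:

```python
def analyze(stmts, sources):
    """Flag stores whose value AND target address are both tainted."""
    tainted = set(sources)
    findings = []
    for i, s in enumerate(stmts):
        if s[0] == "assign":            # ("assign", dst, src)
            _, dst, src = s
            if src in tainted:
                tainted.add(dst)        # taint propagates through assignment
            else:
                tainted.discard(dst)    # overwritten with clean data
        elif s[0] == "store":           # ("store", addr_var, val_var): *addr = val
            _, addr, val = s
            if addr in tainted and val in tainted:
                findings.append(i)      # writing tainted value to tainted address
    return findings

stmts = [("assign", "p", "user_in"),    # pointer derived from user input
         ("assign", "v", "user_in"),    # value derived from user input
         ("store", "p", "v"),           # *p = v  -> attacker-controlled write
         ("assign", "q", "const"),
         ("store", "q", "v")]           # address untainted -> not flagged
print(analyze(stmts, sources={"user_in"}))
# → [2]
```

An attacker-controlled write to an attacker-controlled address in kernel mode typically yields arbitrary memory corruption, which is why this particular pattern is singled out for drivers.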
Trusted Dynamic Measurement Based on Interactive Markov Chains
Zhuang Lu, Cai Mian, and Shen Changxiang
2011, 48(8):  1464-1472. 
Trusted computing ensures the trustworthiness of a platform by extending the trust boundary from the root of trust to the whole platform. Trusted measurement is invoked before the trust boundary is extended from one entity to include another. Static measurement, which takes place at startup, cannot ensure runtime trustworthiness; therefore dynamic trusted measurement is indispensable for guaranteeing that a computer platform runs dependably. Targets of trusted measurement are established according to the dependability, availability and security of information and behavior. Present schemes of dynamic trusted measurement focus on measuring functionality, whereas dependability cannot be guaranteed without measuring performance. Based on interactive Markov chains (IMC), measurement of performance features in addition to function features is introduced. In the expected behavior description, the function expectation is described through a transition system model, and the performance expectation is described by relating a path probability indicating dependability to the expected time in which a specific behavior function is achieved. By comparing the runtime evidence of a platform with a specific expectation, trusted verification of the combination of functionality and performance is achieved. The trusted dynamic measurement model based on IMC thus ensures dependability in performance as well as function, and guarantees the trustworthiness of a platform across the board.
GoodRep Attack and Defense in P2P Reputation Systems
Feng Jingyu, Zhang Yuqing, Chen Shenlong, and Fu Anmin
2011, 48(8):  1473-1480. 
Reputation systems play a critical role in P2P networks as a method to assess the trustworthiness of peers and to combat malicious peers. However, the characteristic of aggregating ratings makes reputation systems vulnerable to abuse by false ratings, offering opportunities to malicious peers: they can conspire with each other to form a collusive clique and unfairly increase their own reputation. Under the cover of high reputation, malicious peers can masquerade as trusted peers and abuse P2P networks arbitrarily. This attack model, called GoodRep, is described in this paper. To defend against the GoodRep attack, the RatingGuard scheme is proposed to secure P2P reputation systems. The scheme is built from three functional modules: DC, DP and AD. The data collection (DC) module collects the previous rating data among raters. The data processing (DP) module measures the rating similarity of raters' activities by analyzing these data. To identify GoodRep cliques, the abnormality detection (AD) module detects anomalies through clustering partition techniques. The experimental results show that RatingGuard is effective in suppressing the GoodRep attack, and a reputation system with RatingGuard gains a higher detection ratio of malicious peers than traditional schemes.
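The rating-similarity measurement of the DP module might look like the following sketch. Cosine similarity and the 0.99 threshold are plausible stand-ins chosen for illustration; the paper's exact metric and clustering step may differ:

```python
import math

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Rows: each rater's ratings of the same five target peers. Colluders
# submit near-identical ratings; honest raters vary with real experience.
ratings = {
    "colluder1": [1, 1, 1, 1, 1],
    "colluder2": [1, 1, 1, 1, 0.9],
    "honest":    [0.2, 1, 0.1, 0.6, 0.3],
}
pairs = {(a, b): cosine(ratings[a], ratings[b])
         for a in ratings for b in ratings if a < b}
suspicious = [p for p, s in pairs.items() if s > 0.99]
print(suspicious)
# → [('colluder1', 'colluder2')]
```

Clusters of raters whose pairwise similarity is abnormally high are then handed to the AD module as candidate GoodRep cliques.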
A Secure Routing Scheme for Opportunistic Networks Based on Identity-Based Encryption
Chen Xi and Ma Jianfeng
2011, 48(8):  1481-1487. 
Opportunistic networks integrate the concepts of delay-tolerant networks, mobile ad hoc networks and social networks. In opportunistic networks, information can be transmitted and shared through opportunistic contacts between mobile users without any pre-existing Internet infrastructure. Social context information is exploited to formulate routing and forwarding strategies that improve network performance efficiently in comparison with traditional routing protocols. However, privacy is a primary challenge in opportunistic networks, as some social context information is sensitive and users do not want to expose it to unfamiliar nodes. We propose a security scheme based on IBE (identity-based encryption) to protect the privacy of nodes and the confidentiality of messages for social context-based routing in opportunistic networks. Efficient PEKS (public key encryption with keyword search) is used to set up trapdoors for each node's social attributes, so that a relay can compute the social context match between itself and the destination node without learning any of the destination node's social attributes. Meanwhile, our scheme encrypts messages using the combination of a node's social context as the public key to ensure confidentiality. Simulation results show that implementing our security scheme has no significant negative impact on delivery probability or average delay, which indicates that the scheme is practical and effective for opportunistic networks.
MRRbot: A Multi-Role and Redundancy-Based P2P Botnet Model
Li Shuhao, Yun Xiaochun, Hao Zhiyu, and Zhai Lidong
2011, 48(8):  1488-1496. 
As common platforms for cyber attacks, botnets cause great damage and pose serious threats. Though defenses against current botnets are effective, botnet evolution presents defenders with a big challenge, which worsens with the development of tri-network integration. Therefore, it is indispensable to predict future botnets for timely defense. In this paper, we summarize the weaknesses of existing botnets and present the design of a multi-role and redundancy-based P2P botnet model (MRRbot). In this model, fake bots are introduced as an important role that can enhance bots' credibility and pertinence, and a redundancy mechanism and a selection algorithm are designed to improve the invisibility and robustness of the command and control channel. Furthermore, MRRbot is analyzed and evaluated with respect to controllability, efficiency and invulnerability, and its robustness is compared with that of previous work. Both theoretical analysis and experimental results demonstrate that MRRbot's botmasters can quickly publish commands to each bot with probability close to 100%, even under effective defenses. MRRbot is more dangerous for its high controllability, efficiency, robustness and invulnerability, and is likely to be adopted by attackers in the future. Finally, a defense system against this advanced botnet, based on a volunteer network, is suggested.
Analysis of Attack Graphs Based on Network Flow Method
Wu Jinyu, Jin Shuyuan, and Yang Zhi
2011, 48(8):  1497-1505. 
An intruder often breaks into a network through a chain of exploits, each exploit in the chain laying the groundwork for subsequent ones. Such a chain is called an attack path, and an attack graph, a succinct representation of attack paths, is an important tool for analyzing security vulnerabilities in networks. Security analysts use attack graphs for detection and defense. However, attack graphs do not directly provide a solution for protecting key resources from invasion, and finding a solution by hand is error-prone and tedious. Existing automated methods for finding such solutions are inefficient and scale poorly. In this paper, we propose solutions based on network flow methods to automate the task of hardening a network against multi-step intrusions. We discuss the optimal critical attack set problem and the optimal critical initial condition set problem. We then define the atomic-attack split weighted attack graph (ASWAG) and the initial-condition split weighted attack graph (ISWAG), and convert the two problems into minimum S-T cut problems on ASWAG and ISWAG; the conversions are proved to be equivalent. Two algorithms with polynomial time complexity are proposed. Simulation results show that the algorithms are more efficient and scale better than existing methods and can be used to analyze large-scale attack graphs.
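The computational core of such a reduction, a minimum S-T cut, can be sketched via Edmonds-Karp max-flow (the max-flow value equals the min-cut capacity). The toy graph and weights below are illustrative, not from the paper:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp on dict-of-dicts residual capacities; returns min-cut value."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                      # no augmenting path left
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)  # bottleneck capacity
        for u, v in path:                    # augment and update residuals
            cap[u][v] -= b
            cap.setdefault(v, {}).setdefault(u, 0)
            cap[v][u] += b
        flow += b

# Toy split weighted attack graph: edge weight = cost of removing that
# exploit or initial condition; S = attacker entry, T = key resource.
cap = {"S": {"a": 3, "b": 2},
       "a": {"T": 2},
       "b": {"T": 4}}
print(max_flow(cap, "S", "T"))
# → 4
```

The edges saturated by the min cut (here a→T and S→b, total cost 4) identify the cheapest set of exploits or conditions to remove so that no attack path from S to T survives.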
Anomaly Detection Using Multi-Level and Multi-Dimensional Analyzing of Network Traffic
Zheng Liming, Zou Peng, and Jia Yan
2011, 48(8):  1506-1516. 
Abstract ( 580 )   HTML ( 4)   PDF (1628KB) ( 864 )  
Related Articles | Metrics
With the rapid growth in the categories and numbers of network attacks and the increasing network bandwidth, network traffic anomaly detection systems face both higher false positive and higher false negative rates. A traffic anomaly detection system with high precision is presented in this paper. Firstly, we use a multi-level and multi-dimensional online OLAP method to analyze traffic data. To reduce the computational and space complexity of this analysis, several optimization strategies are applied in building DetectCube: the minimal directed Steiner tree algorithm is adapted to optimize multiple queries on the cube, and the traffic data is summarized at an appropriate level with the help of the discovery-driven exploration method. Secondly, an entropy measure of the traffic distribution over particular dimensions is defined, and the entropy values in every window and every Group-By operation are collected to form multiple entropy time series. Finally, we employ a one-class support vector machine to classify these multi-dimensional entropy time series and thereby detect anomalies. The proposed system is validated and evaluated against existing systems on a large number of real network traffic data sets; it can detect attacks with high accuracy and efficiency.
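The entropy measure at the heart of the system can be sketched in a few lines. This is a generic Shannon entropy over one traffic dimension per time window, an illustration of the idea rather than the paper's exact feature definition; the port numbers are invented.

```python
import math
from collections import Counter

def window_entropy(values):
    """Shannon entropy (bits) of the empirical distribution of one
    traffic dimension (e.g. destination port) within a time window."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A port scan flattens the destination-port distribution, so the
# window's entropy jumps relative to normal traffic.
normal = [80] * 90 + [443] * 10    # mostly web traffic
scan = list(range(100))            # 100 distinct ports probed once each
print(window_entropy(normal) < window_entropy(scan))  # True
```

Collecting such entropy values per window and per dimension yields the multi-dimensional time series that the one-class SVM then classifies.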
An Algorithm Based on LRU and SCBF for Elephant Flows Identification and Its Application in DDoS Defense
Xie Dongqing, Zhou Zaihong, and Luo Jiawei
2011, 48(8):  1517-1523. 
Abstract ( 668 )   HTML ( 3)   PDF (1166KB) ( 575 )  
Related Articles | Metrics
Identifying elephant flows accurately and in time is valuable for defending against large-scale network security incidents. To overcome the disadvantages of using LRU or SCBF alone for identifying elephant flows, an identification algorithm based on both, LRU_SCBF, is proposed. LRU_SCBF uses a two-level structure consisting of an LRU list and an SCBF array. An arriving mice flow is first stored in the SCBF and is promoted to the LRU list when its count exceeds a threshold. If the LRU list is full, a flow is evicted from it according to the LRU strategy and put back into the SCBF, and so on; elephant flows and mice flows are thus stored separately. Theoretical analysis and simulation experiments show that LRU_SCBF has low storage complexity and both low false positive and low false negative rates, enabling accurate and timely extraction of elephant flows in high-speed networks. Applying the algorithm to DDoS defense, we realize timely detection of and traceback against DDoS attacks.
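The two-level promotion/eviction scheme can be sketched as follows. This toy version uses an exact `Counter` where the paper uses an SCBF (a space-efficient probabilistic counting structure), so it illustrates only the control flow, not the memory savings; the flow labels and parameters are invented.

```python
from collections import OrderedDict, Counter

class LRUSCBFSketch:
    """Toy two-level elephant-flow identifier in the spirit of
    LRU_SCBF: a counting structure (an exact Counter standing in
    for the SCBF) holds mice flows; a flow whose count crosses
    `threshold` is promoted into a bounded LRU list of elephants."""

    def __init__(self, lru_size=4, threshold=3):
        self.lru = OrderedDict()   # flow id -> packet count
        self.counts = Counter()    # stand-in for the SCBF array
        self.lru_size = lru_size
        self.threshold = threshold

    def add_packet(self, flow):
        if flow in self.lru:
            self.lru[flow] += 1
            self.lru.move_to_end(flow)   # refresh recency
            return
        self.counts[flow] += 1
        if self.counts[flow] >= self.threshold:
            # promote; if full, demote the least-recent flow back
            if len(self.lru) >= self.lru_size:
                evicted, cnt = self.lru.popitem(last=False)
                self.counts[evicted] = cnt
            self.lru[flow] = self.counts.pop(flow)

    def elephants(self):
        return set(self.lru)

sketch = LRUSCBFSketch()
for pkt in ['f1'] * 10 + ['f2'] * 2 + ['f3'] * 5:
    sketch.add_packet(pkt)
print(sketch.elephants())  # f1 and f3 promoted; f2 stays a mouse
```

Separating the two levels this way is what keeps the frequently-updated elephant state small and fast while the bulk of mice flows stay in the compact counting structure.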
An Authorization Model and Implementation for Vector Data in Spatial DBMS
Zhang Desheng, Feng Dengguo, and Chen Chi
2011, 48(8):  1524-1533. 
Abstract ( 421 )   HTML ( 0)   PDF (2264KB) ( 341 )  
Related Articles | Metrics
Spatial DBMSs and their applications have become more and more popular. They make daily life more convenient and comfortable, but they also bring serious threats to security and privacy. Most applications require a fine-grained, flexible access control model that supports negative authorization, together with a high-performance authorization implementation. According to the security requirements of vector data applications in spatial DBMSs, a predicate-based access control model (PBAC) is presented, and a predicate rewrite technique is adopted to implement the authorization model in the spatial DBMS. Compared with existing work, our model uses spatial predicates to specify the authorized objects, which improves expressive flexibility, and it also supports negative authorization. Our implementation uses predicate rewriting, which not only avoids an additional spatial query during authorization enforcement but also ensures low coupling between the vector data authorization implementation and the spatial DBMS, as well as convenient elimination of redundant spatial predicates. Experimental results show that our method satisfies the security requirements and realizes effective authorization of vector data in spatial DBMSs.
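The predicate rewrite idea can be illustrated at the SQL level. This sketch is ours, not the paper's implementation: a real system would rewrite the parsed query tree inside the DBMS, and the `ST_Within` predicate and `:authorized_region` parameter are hypothetical examples of a spatial authorization predicate.

```python
def rewrite_query(sql, auth_predicate):
    """Append the user's authorization predicate to the WHERE clause,
    so access control is enforced by the same (spatial) query plan
    instead of by a separate spatial query. String-level sketch only;
    a real implementation rewrites the parsed query tree."""
    if ' where ' in sql.lower():
        return f"{sql} AND ({auth_predicate})"
    return f"{sql} WHERE {auth_predicate}"

q = "SELECT name FROM roads WHERE length > 10"
# hypothetical spatial predicate restricting the user to an authorized region
p = "ST_Within(geom, :authorized_region)"
print(rewrite_query(q, p))
```

Because the predicate becomes part of the query itself, the optimizer can use spatial indexes on it, which is why this approach avoids the cost of a second spatial query at enforcement time.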
Detecting and Managing Hidden Process via Hypervisor
Wang Lina, Gao Hanjun, Liu Wei, and Peng Yang
2011, 48(8):  1534-1541. 
Abstract ( 883 )   HTML ( 1)   PDF (2714KB) ( 590 )  
Related Articles | Metrics
A malicious process is a significant threat to computer system security: it can compromise the integrity of the system, and it becomes increasingly stealthy and elusive when facilitated by rootkit techniques. Conventional detection tools are deployed and executed inside the very host they protect, which makes them vulnerable to deception and subversion. To improve detection accuracy and tamper resistance, a VMM-based hidden process detection system located outside the protected virtual machine is designed and implemented. Using the virtual machine introspection mechanism, the system implicitly inspects the low-level state of the protected virtual machine and then reconstructs, via semantic view reconstruction, the high-level OS abstractions (process queues) needed for analysis. Based on the cross-view validation principle, the system compares the process queues of the internal and external views and identifies hidden processes through their discrepancies. The system also provides a response mechanism that reports specific information about a hidden process (such as its network ports and real memory occupation) to the administrator, and it supplies interfaces for terminating and suspending hidden processes. Experiments on real-world rootkits that hide processes validate the effectiveness and feasibility of the detection system.
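The cross-view validation step reduces to a set difference once both views are reconstructed. The sketch below is illustrative only: the PIDs and process names are invented, and in the real system the external view comes from hypervisor-level memory introspection rather than a Python dictionary.

```python
def hidden_processes(internal_view, external_view):
    """Cross-view validation: processes visible in the trusted
    external (VMM-level) view but absent from the in-guest view
    are flagged as hidden."""
    return set(external_view) - set(internal_view)

# external view: reconstructed from guest memory by the hypervisor;
# internal view: reported by tools inside the (possibly rootkit-
# compromised) guest, where the rootkit has unlinked its process
external = {1: 'init', 70: 'sshd', 666: 'rootkit_helper'}
internal = {1: 'init', 70: 'sshd'}
print({pid: external[pid] for pid in hidden_processes(internal, external)})
```

The comparison is trustworthy precisely because the external view is built outside the guest, beyond the rootkit's reach.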
Detection Approach for Covert Channels Based on Concurrency Conflict Interval Time
Wang Yongji, Wu Jingzheng, Ding Liping, and Zeng Haitao
2011, 48(8):  1542-1553. 
Abstract ( 558 )   HTML ( 1)   PDF (2800KB) ( 375 )  
Related Articles | Metrics
Concurrency conflicts may introduce data-conflict covert channels in multilevel secure systems. Existing covert channel detection methods have two flaws: 1) they analyze conflict records at a single point, so invaders can evade detection; 2) they rely on a single indicator, which produces false positives and false negatives. We present a detection method based on conflict interval time, called CTIBDA, which solves both problems: 1) analyzing conflict records by subject and object prevents intruders from dispersing their activity; 2) both the distribution and the sequence of the intervals between transaction conflicts are used as indicators. Experimental results show that this approach reduces false positives and false negatives and increases accuracy. CTIBDA is suitable for online implementation and can be applied universally to concurrency conflict covert channels in other scenarios.
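One way to see why interval times are a useful indicator is to compare their regularity across workloads. The statistic below (coefficient of variation of inter-conflict gaps) and the timestamp sequences are our own illustration, not the paper's exact indicators: deliberately modulated covert-channel traffic tends to show far more regular intervals than benign, bursty contention.

```python
import statistics

def interval_regularity(timestamps):
    """Coefficient of variation of the gaps between successive
    conflict records; lower values mean more regular (more
    suspicious) timing. Illustrative statistic only."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.stdev(gaps) / statistics.mean(gaps)

covert = [0, 10, 20, 30, 40, 50]   # near-periodic covert signaling
benign = [0, 3, 4, 30, 31, 90]     # bursty, irregular contention
print(interval_regularity(covert) < interval_regularity(benign))  # True
```

Combining such a distribution-based statistic with the sequence of intervals, as the abstract describes, is what lets the detector cut both false positives and false negatives.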
The Research and Application of a Specific Instruction Processor for AES
Xia Hui, Jia Zhiping, Zhang Feng, Li Xin, Chen Renhai, and Edwin H.-M. Sha
2011, 48(8):  1554-1562. 
Abstract ( 569 )   HTML ( 0)   PDF (964KB) ( 543 )  
Related Articles | Metrics
Encryption algorithms are widely used in the embedded trusted computing domain, so improving their execution efficiency has become an important issue. The Advanced Encryption Standard (AES) is an encryption algorithm that has been widely adopted in the trusted computing field due to its high security, low cost, and strong enforceability. This paper employs an instruction set architecture (ISA) extension method to optimize the algorithm. Based on the electronic system level (ESL) methodology, a commercial processor tool built on the language for instruction-set architectures (LISA) is used to construct an efficient AES application-specific instruction processor (AES_ASIP), with the objective of improving the execution efficiency of the AES algorithm. The AES_ASIP model is then implemented on an FPGA (field-programmable gate array) platform, and a series of simulations are conducted to evaluate its performance. Experimental results show that, compared with an ARM ISA processor, our processor improves execution efficiency by 58.4% and saves 47.4% of code storage space.
Histograms Difference and Quantitative Steganalysis of JPEG Steganography Based on Relative Entropy
Yang Chunfang, Liu Fenlin, and Luo Xiangyang
2011, 48(8):  1563-1569. 
Abstract ( 655 )   HTML ( 1)   PDF (701KB) ( 367 )  
Related Articles | Metrics
For the histogram features commonly used in image steganalysis, a category of algorithms based on relative entropy is given to measure the difference between the histograms of a given image and the estimated cover image, and a quantitative steganalysis method based on relative entropy is proposed for JPEG image steganography. Firstly, based on the likelihood ratio test, which is the optimal test for deciding between two hypotheses, the advantage of measuring the distance between two histograms with relative entropy is analyzed, and two relative-entropy-based algorithms for measuring the difference between two histograms are given. Secondly, for the histogram-like features used in blind steganalysis methods, new histogram difference features are calculated by the given algorithms and fed to support vector regression to train quantitative steganalyzers for JPEG image steganography. Finally, the proposed methods are applied to the quantitative steganalysis of two typical categories of JPEG image steganography: JSteg and the improved F5 steganography. Experimental results show that, compared with other typical algorithms, the steganalyzers based on the histogram difference features calculated by the proposed algorithms have higher accuracy and stability.
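The core histogram-difference measure is the relative entropy (Kullback-Leibler divergence) between two normalized histograms. The sketch below shows the standard definition with a small smoothing term for empty bins; the histogram values are invented, and the paper's two concrete algorithms refine this basic measure.

```python
import math

def relative_entropy(p_hist, q_hist, eps=1e-10):
    """KL divergence D(P || Q) between two normalized histograms.
    `eps` smooths empty bins so the logarithm stays defined.
    Generic definition; the paper derives two refined variants."""
    p_total, q_total = sum(p_hist), sum(q_hist)
    d = 0.0
    for p, q in zip(p_hist, q_hist):
        p = p / p_total + eps
        q = q / q_total + eps
        d += p * math.log(p / q)
    return d

cover_hist = [120, 80, 40, 20]   # e.g. DCT-coefficient histogram of the estimated cover
stego_hist = [100, 95, 45, 20]   # histogram after hypothetical embedding
print(relative_entropy(stego_hist, cover_hist) > 0)  # True: embedding shifts the histogram
```

Because relative entropy is zero only when the two distributions coincide and grows with the distortion that embedding introduces, it is a natural feature to feed into support vector regression for estimating the embedded message length.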