ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 March 2012, Volume 49 Issue 3
Paper
Cache Attacks on Block Ciphers
Zhao Xinjie, Wang Tao, Guo Shize, and Liu Huiying
2012, 49(3):  453-468. 
In recent years, cache attacks have become one of the most serious threats to block ciphers implemented on microprocessors, and research in this area is a hot spot of cryptographic side-channel analysis. This paper surveys cache attacks on block ciphers. The mechanism of the cache and the side-channel information leaked by the difference between cache hits and misses are described, and the characteristics of cache accesses and the corresponding information leakages are analyzed. Several typical cache attack techniques on block ciphers are discussed in detail from the aspects of attack model, analysis method, and research progress. Finally, the features of cache attacks are summarized, pitfalls in current research are pointed out, and future directions for cache attacks are given.
A Survey of CAPTCHA Technology
Li Qiujie, Mao Yaobin, and Wang Zhiquan
2012, 49(3):  469-480. 
The completely automated public Turing test to tell computers and humans apart (CAPTCHA), also known as a human interactive proof (HIP), is a program that can generate and grade tests that most humans can pass but current computer programs cannot. CAPTCHA provides a way to automatically distinguish a human from a computer program. It has emerged as a standard network security technology and has been successfully used by many popular Web sites, such as Google, Yahoo!, and Microsoft. CAPTCHAs are designed around certain unsolved artificial intelligence (AI) puzzles and can be categorized into three main types according to carrier and concept: text-based, image-based, and sound-based. Text-based CAPTCHAs, built on character recognition, have been widely used. Image-based CAPTCHAs, which exploit harder problems in computer vision, are still in the research stage. Sound-based CAPTCHAs, aimed at providing access to the visually impaired, complement visual CAPTCHAs. This paper introduces the history and general principle of CAPTCHA. The latest research progress in CAPTCHA design is expounded through typical examples, and their usability and security are discussed. Finally, directions for future study and the problems to be solved are pointed out.
IRC Botnets’ Homology Identifying Method Based on Dynamic Time Warping Distance of Communication Feature Curves
Jin Xin, Li Runheng, Gan Liang, and Li Zhengyi
2012, 49(3):  481-490. 
An IRC botnet can be regarded as a collection of compromised computers (called zombie computers) running software under the command-and-control infrastructure constructed by an IRC server. The connections between the botnet server and the bots are usually very dynamic. To describe a botnet at a finer granularity, some previous work identifies homologous IRC botnets based on the similarity of the botnets, measured from multidimensional data obtained by infiltrating them; that is, information such as the server version, the IP address and DNS name of the IRC server, the IRC server/network name, and the botmaster ID can be obtained by joining the command-and-control channel. However, such information does not represent the essential characteristics of a botnet; as server versions are upgraded, obtaining information such as the botmaster ID becomes more difficult, and the error ratio of such models is hard to bound. A method is proposed that identifies homologous botnets by extracting communication feature curves and computing the dynamic time warping distance between the curves; it distills and uses the feature points of the communication curves to increase precision, and uses an improved LB_PAA to reduce the amount of computation. Experiments were carried out, and the resulting error rates are evaluated and reported.
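For reference, the following is a minimal sketch of the core metric the abstract names: the dynamic time warping (DTW) distance between two communication feature curves. The curve representation (1-D NumPy arrays) is an illustrative assumption; the authors' feature-point distillation and improved LB_PAA pruning are not reproduced here.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic time warping distance between two 1-D feature curves."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Two botnets would then be judged homologous when the DTW distance between their feature curves falls below a threshold; a lower bound such as LB_PAA serves only to skip full DTW computations that cannot beat the current best distance.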
A DPA Resistant Technology Based on Register Switching Time Randomization
Yue Daheng, Qi Shubo, Li Shaoqing, and Zhang Minxuan
2012, 49(3):  491-498. 
Differential power analysis (DPA) is one of the most effective side-channel attack techniques against security chips. The success of a DPA attack depends on two aspects: the correlation between power consumption and data, and the synchronization of the target function signals. Countermeasures based on power-balanced logic styles aim to eliminate the power dependence, but these methods introduce large performance, power, and area costs. Another effective way to counteract DPA attacks is to make the target function signals asynchronous. As the output signals of registers are usually chosen as the target function, this asynchronism can be achieved by randomizing the register switching time. The resistance that register switching time randomization offers against DPA attacks is first analyzed theoretically, and then a novel countermeasure based on this idea is presented: the switching times of critical registers are randomized by the phase difference between different clock domains. The method for computing timing constraints on data and control signals transferred between the clock domains is introduced, and the effect of clock frequencies on the randomization is analyzed. The countermeasure is implemented in an AES coprocessor, and simulation confirms the correctness of the theoretical analysis. The simulation results show that this technique can improve the resistance of the cryptographic circuit at specific working clock frequencies.
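As a toy illustration of the underlying principle (not the authors' AES coprocessor experiment), the sketch below simulates Hamming-weight power leakage and shows how jittering the switching instant across clock slots lowers the first-order correlation a DPA attacker can exploit; all signal and noise parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.integers(0, 256, 10000)                    # intermediate values
hw = np.vectorize(lambda v: bin(v).count("1"))(data)  # Hamming-weight leakage

def traces(jitter_slots: int) -> np.ndarray:
    """Simulated 8-sample power traces; leakage lands in a random slot."""
    n = len(data)
    t = rng.normal(0.0, 1.0, (n, 8))               # baseline noise
    pos = rng.integers(0, jitter_slots + 1, n)     # randomized switching time
    t[np.arange(n), pos] += hw                     # data-dependent consumption
    return t

for j in (0, 3, 7):
    t = traces(j)
    corr = max(abs(np.corrcoef(hw, t[:, k])[0, 1]) for k in range(8))
    print(f"jitter over {j + 1} slot(s): max |correlation| = {corr:.2f}")
```

With no jitter, the leakage concentrates in one sample and correlates strongly with the data; spreading it over more slots dilutes the per-sample correlation, which is exactly the effect the countermeasure targets.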
A Multi-Policies Threshold Signature Scheme with Group Verifiability
Wang Feng, Zhou Yousheng, Gu Lize, and Yang Yixian
2012, 49(3):  499-505. 
The multi-policies threshold signature is very practical in signature research. In this kind of scheme, multiple secret keys are shared among a group of users, and each secret key has its own threshold value; the secret keys corresponding to different threshold values can be used to sign documents according to their significance. However, no current multi-policies threshold signature scheme provides group verifiability. To solve this problem, a group-verifiable multi-policies threshold signature scheme is proposed in this paper. In the presented scheme, group verifiability is realized while the traceability property of the multi-policies threshold signature remains unchanged. Namely, not only the threshold values of the signing group but also those of the verifying group can change with the security classification of the signed documents; in the verification process, the threshold values of the verifying group depend on the significance of the documents. The main merit is that the signing group and the verifying group can swap their roles, so the two groups can sign and verify for each other, i.e., it is a two-way signing and verifying scheme. The analysis shows that the proposed scheme is also secure and practical.
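For context, threshold schemes of this kind build on (t, n) secret sharing: any t of n group members can jointly use a shared key, and in a multi-policies scheme different keys carry different thresholds. Below is a minimal sketch of Shamir sharing over a prime field; the modulus and API are illustrative assumptions, and the group-verifiability and traceability mechanisms of the proposed scheme are beyond this sketch.

```python
import random

P = 2**127 - 1  # illustrative prime field modulus

def share(secret: int, t: int, n: int):
    """Split secret into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
```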
EasiTest: A Multi-Radio Testbed for Heterogeneous Wireless Sensor Networks
Zhao Ze, Liu Qiang, Li Dong, and Cui Li
2012, 49(3):  506-517. 
In this paper we propose a heterogeneous multi-radio testbed for wireless sensor networks. Sensor nodes with medium-high-speed and medium-low-speed radios, named EZ271 and EZ521 respectively, are developed for the testbed. The testbed supports both the study and the applications of large-scale heterogeneous sensor networks, and an administration platform is provided to monitor and control it. EasiTest enjoys high flexibility, powerful processing capability, and ease of expansion. The performance of sensing, data collection, data management, resource allocation, and server and node status checking and configuration can be evaluated on the testbed. Users can also configure preferences and assign and share resources via the testbed's friendly user interface, where various protocols such as 802.15.4 and 802.11 can be tested. Protocols at multiple layers can also be tested, which greatly improves the research and deployment efficiency of various types of sensor networks. EasiTest provides a powerful tool not only for the study and evaluation of large-scale, heterogeneous sensor networks, but also for quick prototyping of practical WSN applications. To demonstrate the capability of the testbed, data transmission with a single radio and with multiple radios is tested on it, and the results are given.
An Architecture of Mobile Delay Tolerant Networks and Its Application
Zhou Xinyun, Li Zhi, Li Liqun, and Sun Limin
2012, 49(3):  518-528. 
Using the communication opportunities generated by node movement, mobile delay tolerant networks (MDTNs) can offer highly usable, low-cost wireless communication services in non-fully-connected networks. A general MDTN architecture (GMA) is proposed in this paper. In GMA, an MDTN is divided into five layers: the application layer, assembly layer, forwarding layer, link management layer, and device layer. The architecture focuses on the effective utilization of communication opportunities, including the analysis of mobility models, the fast discovery of adjacent neighbors and trajectory forecasting, the efficient forwarding of data packets, and the creation of additional communication opportunities. Based on GMA, users can build useful MDTN applications through the general interfaces of the application layer. In urban environments, the networks formed by mobile buses are naturally special cases of MDTNs, with a number of promising advantages including large coverage and fixed trajectories; hence, they are well suited to urban information gathering and dissemination. Based on GMA, a 3-tier network framework is proposed for the bus-based MDTN, and several key issues are studied concretely. Schemes are proposed for smart Internet access, fairness-aware data gathering, geographic data dissemination, and buffer management and scheduling. Extensive evaluations based on real-world experiments and traces show that the proposed architecture and schemes perform efficiently with high robustness.
A Distributed Planar t-Spanner Topology Control Algorithm in Wireless Sensor Networks
Chen Zhigang, Xu Pengfei, and Deng Xiaoheng
2012, 49(3):  529-540. 
Under the condition of maintaining network connectivity, each sensor adaptively varies its transmission power, which minimizes power consumption and reduces radio interference, so as to prolong the lifetime of a wireless sensor network. A new geometric structure, named PSLDel, is presented based on the Voronoi tessellation and the local Delaunay triangulation, and efficient distributed, message-exchange-based algorithms are proposed to construct it. With PSLDel as the underlying logical topology of a wireless sensor network, each sensor can reduce its transmission power to the minimum needed to reach its farthest logical neighbor, which yields a network topology with desirable features such as connectivity, sparseness, planarity, and the t-spanner property. Theoretical analyses and simulation results show that the performance of PSLDel is close to that of UDel, which is constructed globally, in terms of logical neighbors, minimal transmission power, and radio interference, while the network delay of PSLDel is slightly better than that of UDel. Meanwhile, compared with AUDel, another locally constructible approximation of UDel, the communication cost of constructing PSLDel is reduced by at least 55%. These properties improve energy efficiency and prolong the lifetime of wireless sensor networks.
Design of Mobile Sink Node for SDMA Applications in WSN
Zhang Xiwei and Chen Guihai
2012, 49(3):  541-549. 
Recent years have witnessed a surge of interest in efficient data collection schemes in wireless sensor networks (WSNs). However, static wireless sensor networks suffer from serious problems such as energy holes, overlapping, and hot spots. An efficient way to solve these problems is to use mobile nodes: through data collection and relaying, mobile nodes reduce the data transmitted between static nodes, conserve the power of static nodes, and therefore prolong the network lifetime. In this paper, we introduce the design and implementation of a mobile sink node based on an ARM7 core chip, named DataTruck, which has large storage and high speed. Furthermore, we integrate a smart antenna system to collect data from multiple static nodes concurrently on the same frequency. We apply the space-division multiple access (SDMA) technique to data gathering by equipping the DataTruck with two antennas; with SDMA, two distinct compatible sensors can successfully upload data to the DataTruck concurrently. Experiments show that DataTruck collects data efficiently and, by using SDMA, reduces the average data delay.
Survivability Research on the Impact of Node Failure in MANET
Ma Chi, Li Zhi, Zhang Hong, and Liu Fengyu
2012, 49(3):  550-557. 
By analyzing the survivability theory of traditional information systems, this paper puts forward an index named DRUA (delivery rate under attack) to evaluate MANET survivability, and an evaluation formula for this index is provided based on spatial probability theory. To verify the formula, experiments are conducted with the NS2 simulator. In these experiments, the well-known reactive routing protocol AODV is selected as the communication protocol, since other reactive routing protocols are designed similarly to AODV, which has better performance. Three kinds of attack scenarios are designed for comparison: random attacks, deliberate attacks, and random neighbor attacks. The experimental results show that when the number of CBR flows is comparatively small, random and deliberate attacks have little negative effect on the performance of a MANET and can even improve it. On the contrary, when there are a large number of CBR flows in a large-scale MANET, random attacks can greatly degrade performance, which means the evaluation formula is more suitable for large-scale MANETs. In addition, randomly attacking neighbor nodes reduces performance noticeably even when there are only a small number of CBR flows.
Environment-Aware Quantitative Assessment Model for Service Availability in MANET
Wang Wenbin, Sun Qibo, and Yang Fangchun
2012, 49(3):  558-564. 
In mobile ad hoc networks (MANETs), mobile nodes share resources and collaborate using Web service technologies, where a service is defined as any hardware or software entity residing on a mobile device or platform. However, the complex environment, dynamic network links, and heterogeneous terminals make service availability vary constantly and difficult to assess quantitatively. In this paper, after analyzing the running environment of services and the environmental factors that affect service availability, an environment-aware quantitative assessment model for service availability (EQAM-SA) is proposed, which is triggered by service requests. In EQAM-SA, as soon as a service request is received, the running parameters of services are obtained and the quantitative evaluation indicators are computed in real time. The weights of these indicators differ across application scenarios, so the Delphi approach is used to determine them. Then the analytic hierarchy process (AHP) and multi-attribute evaluation with a weighted sum model are adopted to evaluate service availability. Finally, the feasibility and effectiveness of the model are validated by a number of simulations.
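The final aggregation step the abstract describes is a weighted sum model: indicator values measured at request time are combined with Delphi/AHP-derived weights. A minimal sketch follows, with indicator names and weight values as illustrative assumptions.

```python
def service_availability(indicators, weights):
    """Weighted sum model: indicators and weights keyed by the same names."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights are normalized
    return sum(weights[k] * indicators[k] for k in weights)

# Hypothetical indicators computed at request time; weights from Delphi/AHP.
score = service_availability(
    {"link_stability": 0.8, "battery": 0.6, "node_load": 0.7},
    {"link_stability": 0.5, "battery": 0.3, "node_load": 0.2},
)
print(f"assessed availability: {score:.2f}")
```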
An Interpolation Method Using an Improved Markov Model
Du Yi, Zhang Ting, Lu Detang, and Li Daolun
2012, 49(3):  565-571. 
Reconstruction always relies on some kind of interpolation method, and its accuracy can be improved by using multiple data with different dimensions, resolutions, or types. COSGSIM (sequential Gaussian co-simulation) is a widely used geostatistical interpolation method, and in recent years it has been introduced into other fields for prediction and reconstruction, because it can estimate unknown values from multiple known data, including primary data (hard data) and auxiliary data (soft data). The LMC (linear model of coregionalization) and the original MM1 (Markov model 1) were proposed for COSGSIM to integrate the primary and auxiliary data. The main limitation of LMC is the requirement to model a positive definite cross-covariance matrix for the different variables, while MM1 is a reasonable model only when the primary data are defined on a larger volume support than the auxiliary data. MM2 (Markov model 2), an improved Markov model, is then presented to meet this condition. The MM2 screening hypothesis states that an auxiliary datum screens the influence of any other auxiliary datum on its collocated primary datum. Experimental results show that the interpolation results of COSGSIM under MM2 are much better than those under MM1 when the primary data are defined on a larger volume support.
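For reference, the cross-correlogram forms of the two Markov models, as commonly stated in the geostatistics literature (with $Z_1$ the primary variable and $Z_2$ the auxiliary variable), are:

```latex
% Cross-correlograms under the two Markov screening hypotheses:
\rho_{12}(h) = \rho_{12}(0)\,\rho_{1}(h) \qquad \text{(MM1)}
\rho_{12}(h) = \rho_{12}(0)\,\rho_{2}(h) \qquad \text{(MM2)}
```

Under MM1 the cross-correlogram inherits the autocorrelation of the primary variable; under MM2 it inherits that of the auxiliary variable, which is what makes MM2 appropriate for the support configuration described above.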
A PSVM-Based Active Learning Method for Mass Detection
Wang Ying, Gao Xinbo, Li Jie, and Wang Xiumei
2012, 49(3):  572-578. 
In mammograms, masses vary widely in shape and density, yet share common appearances with normal tissue, which greatly increases the detection difficulty and degrades the performance of automatic mass detection systems. To improve the sensitivity of mass detection, we propose an active learning scheme to detect various masses in mammograms. Firstly, pairwise constraints are introduced, and the scheme employs a pairwise support vector machine (PSVM) that incorporates the relationships among different samples into the classification procedure. Furthermore, according to the detection results, missed samples with their uncertainty information are combined with the matched feature distances among different samples for re-consideration. Then, using this representative information, the proposed PSVM-based method actively selects the pairwise samples to be fed back into the training set. The experimental results show that the proposed active learning method with PSVM makes full use of the information in the samples and thus achieves satisfactory detection rates and false positive rates. The method also achieves a good compromise between sensitivity and specificity, and the whole learning scheme has better generalization ability and detection performance than some existing detection methods.
CSMP: Compressive Sensing Matching Pursuit Based on Restricted Isometry Property
Xie Zhipeng and Chen Songcan
2012, 49(3):  579-588. 
Compressive sensing consists of compressed sampling and sparse reconstruction, and is a method for computing sparse solutions of underdetermined linear systems. Large-scale, fast reconstruction methods have become an active research topic in compressive sensing. In this paper, a matching pursuit algorithm named CSMP is presented. It adopts an iterative framework and best s-term approximation to update the signal support and magnitudes. A convergence analysis is developed based on the restricted isometry property (RIP). A sufficient condition for the convergence of CSMP is established with the restricted isometry constant (RIC) of order 3s less than 0.23, which relaxes the RIC condition for recovering an s-sparse signal by matching pursuit and improves the convergence speed. To adapt to large-scale sparse signal reconstruction, the proposed method is equipped with a matrix-vector multiplication operator that can select subsets of both the random projection measurements and the sparse bases, making it able to utilize the discrete cosine transform and the wavelet transform while avoiding explicit storage of large matrices. Compressive sampling and reconstruction experiments are conducted on a sparse Gaussian signal of length 2^20 with random support and on the 512×512 Lenna image. Comparisons with other algorithms demonstrate that the proposed method is stable and fast for large-scale sparse signal reconstruction in compressive sensing.
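The iterative "best s-term approximation" loop the abstract describes follows the familiar greedy pursuit pattern: candidate selection, least-squares fit on a merged support, and pruning back to s terms. Below is a minimal sketch of that family in the CoSaMP style, offered as an illustration of the framework rather than the authors' exact CSMP algorithm; for large-scale use, the explicit matrix Phi would be replaced by the operator form the paper mentions.

```python
import numpy as np

def greedy_pursuit(Phi: np.ndarray, y: np.ndarray, s: int,
                   iters: int = 20) -> np.ndarray:
    """CoSaMP-style pursuit: candidate merge, least squares, s-term pruning."""
    n = Phi.shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        proxy = Phi.T @ (y - Phi @ x)                 # correlate residual
        omega = np.argsort(np.abs(proxy))[-2 * s:]    # strongest candidates
        T = np.union1d(omega, np.flatnonzero(x)).astype(int)
        sol, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)
        b = np.zeros(n)
        b[T] = sol
        keep = np.argsort(np.abs(b))[-s:]             # best s-term approximation
        x = np.zeros(n)
        x[keep] = b[keep]
    return x
```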
Multi-resolution Data Storage Method for Region Query in Sensor Networks
Ning Zhao, Shi Shengfei, Li Jianzhong, and Wang Chaokun
2012, 49(3):  589-597. 
In wireless sensor networks, communication accounts for a substantial proportion of a node's total energy cost, and users may issue queries at different resolutions, from coarse to fine, so it is necessary to establish a multi-resolution data storage mechanism in the network. Aiming at the multi-resolution data storage problem in wireless sensor networks, a multi-resolution data compression and storage strategy called MDCS is first proposed, which instructs the nodes to produce approximations at multiple resolutions and store them hierarchically in the network, as follows: low-level sensor nodes rank and approximate sensor data using spatial correlations within a region unit, while high-level nodes process sensor data by approximation and clustering technologies using spatial correlations between adjacent region units. These operations produce approximation results with resolutions from coarse to fine. Secondly, a region query processing method based on MDCS is proposed, which performs region query processing according to a resolution threshold given by the user and returns the results to the user. Simulation experiments show that this MDCS-based region query processing method efficiently supports multi-resolution region queries and achieves significant benefits.
An Incremental Method for Mining Generalized Association Rules Based on Extended Canonical-Order Tree
Mao Yuxing and Shi Baile
2012, 49(3):  598-606. 
Mining generalized association rules is an important research area in data mining. In real-life applications, the transaction database is updated frequently, which makes the maintenance of generalized association rules a challenging research task. In this paper, firstly, by analyzing and summarizing all the update cases of the taxonomy data, several special properties of updating are derived. Secondly, we project the transaction database onto a new compact structure called GECT (generalized extended canonical-order tree), composed of two header tables, which can be used to mine both the whole updated tree and the incremental tree. Thirdly, we propose an incremental updating algorithm, GECT-IM, which finds most updated frequent itemsets by scanning the set of updating transactions instead of the original database. To address the limitation that GECT-IM still needs to scan the GECT when infrequent itemsets become frequent, we propose a further optimized structure called PGECT (pre-large generalized extended canonical-order tree) and an efficient algorithm, PGECT-IM. Within a certain updating scope, it can find all the updated frequent itemsets without rescanning the original PGECT. Experiments on synthetic datasets show that both GECT-IM and PGECT-IM are correct and complete, and that they outperform well-known existing algorithms.
Logical Encoding Methods in Intelligent Planning
Lü Shuai, Liu Lei, Wei Wei, and Gao Bingbing
2012, 49(3):  607-619. 
The design and implementation of logical encoding methods are key issues for translation-based planning methods, which translate a given planning problem into a series of other classically solvable problems during the planning procedure. All logical encoding methods must consider logical representation and reasoning based on the corresponding propositional logic, first-order logic, multi-valued logic, probabilistic logic, modal logic, epistemic logic, or other adopted non-classical logics. This paper introduces the concrete details of state-of-the-art logical encoding methods in intelligent planning, including linear encoding, Graphplan-based encoding, state-based encoding, action-based encoding, proposition-based encoding, transition-based encoding, lifted causal encoding, multi-valued-variable-based encoding, ordered-binary-decision-diagram-based encoding, constraint-satisfiability-based encoding, and so on. It also introduces the encoding methods that may be needed for probabilities, epistemic properties, modal assumptions, and flexible constraints on planning operations or states in some proposed abstract planning domain problems whose formal characteristic expressions are still disputed. Drawing on experimental results from the International Planning Competition and the relevant literature, we assess the soundness and feasibility of these methods, as well as their application prospects in other relevant areas. Finally, we discuss the challenges, possible responses to them, and likely future research hotspots.
An Ant Colony Optimization Algorithm Based on Dynamic Evaporation Rate and Amended Heuristic
Liu Quan, Chen Hao, Zhang Yonggang, Li Jiao, and Zhang Shenbin
2012, 49(3):  620-627. 
Research on swarm intelligence provides a better way to approach distributed control and optimization. Much study has been devoted to swarm intelligence methods such as ACO (ant colony optimization), and many applications have been made in the field of combinatorial optimization. However, when solving combinatorial optimization problems, especially large-scale ones, slow convergence and easy stagnation still prevent these algorithms from being more widely used. The DEAHACO algorithm is presented, in which a dynamic evaporation rate mechanism is used to achieve a better balance between solution efficiency and solution quality, preventing the algorithm from falling into local optima. To speed up convergence, the heuristic information is redefined to guide the algorithm toward fast convergence. A boundary symmetric mutation strategy is introduced to mutate iteration results symmetrically, which not only enhances the mutation efficiency but also improves the mutation quality. Experimental results show that DEAHACO outperforms other algorithms, with a convergence rate at least 20% higher, and it also performs well on other classic TSP instances.
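A minimal sketch of the dynamic evaporation rate idea: the pheromone update uses an evaporation rate that shrinks over iterations, favoring exploration early and exploitation late. The decay schedule and bounds below are illustrative assumptions, not the DEAHACO settings.

```python
import numpy as np

def update_pheromone(tau, tours, lengths, iteration,
                     rho0=0.5, rho_min=0.1, decay=0.98):
    """One pheromone update with a dynamically shrinking evaporation rate."""
    rho = max(rho_min, rho0 * decay ** iteration)     # high early, low late
    tau *= (1.0 - rho)                                # evaporation
    for tour, length in zip(tours, lengths):
        for i, j in zip(tour, tour[1:] + tour[:1]):   # closed TSP tour
            tau[i, j] += 1.0 / length                 # deposit scaled by quality
    return tau
```

Early iterations with a high rate wash out pheromone quickly and keep the search diverse; as the rate decays toward its floor, accumulated pheromone persists and the colony concentrates around the best tours found so far.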
Structured Ensemble Learning for Email Spam Filtering
Liu Wuying and Wang Ting
2012, 49(3):  628-635. 
To resolve the conflict between low computational complexity and high classification accuracy in email spam filtering algorithms, a structured ensemble learning approach is proposed within a multi-field learning framework, which combines the results of multiple base classifiers according to the document structure to pursue higher classification performance. Multiple lightweight base classifiers are generated from string features of email documents, and a string-frequency index is used to store labeled data, so that the time cost of each update or lookup is constant. According to the multi-field nature of email documents, two linear combination weights are proposed, based respectively on the historical classification effectiveness of the field classifiers and the current classification contribution of the field documents. Combining both, a compound linear combination weight is proposed, which improves overall classification accuracy. Experimental results on the TREC spam filtering task with immediate full feedback show that the compound-weight-based structured ensemble learning method completes the filtering task quickly (47.24 min), with an overall 1-ROCA performance comparable to the best result (0.0055) among the participants in the TREC 2007 spam track.
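A minimal sketch of the compound linear combination: each field classifier's score is weighted by both its historical effectiveness and the current field's contribution, then normalized. The weight formula (historical accuracy times relative field length) and the field names are illustrative assumptions, not the paper's exact definitions.

```python
def combine(field_scores, hist_acc, field_len):
    """Compound weight: historical effectiveness x current field contribution."""
    total = sum(field_len.values()) or 1
    num = den = 0.0
    for field, score in field_scores.items():
        w = hist_acc[field] * (field_len[field] / total)   # compound weight
        num += w * score
        den += w
    return num / den if den else 0.0

# Hypothetical per-field spam scores for one message.
spam_score = combine(
    {"subject": 0.9, "body": 0.7, "header": 0.4},     # base classifier outputs
    {"subject": 0.85, "body": 0.92, "header": 0.75},  # historical accuracy
    {"subject": 8, "body": 120, "header": 30},        # current field sizes
)
```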
Logical Volume High Performance Snapshot Based on Out-of-Band Storage Virtualization
Zhou Wei, Tan Huailiang, Yi Letian, and Shu Jiwu
2012, 49(3):  636-645. 
Snapshots implemented at the logical volume level adapt to different underlying physical storage devices and transparently provide data protection services for many kinds of upper-layer applications. A scalable metadata management strategy is proposed; with this strategy, a storage system can freely support more than one type of snapshot volume to meet the different requirements of applications. By compacting snapshot index data and constructing a composite snapshot volume list with Finesnap volumes and Checkpoint volumes, the storage system maintains high performance and a high resource utilization ratio while keeping the time to recover from a snapshot volume short. With a self-adaptive snapshot generation policy, modifications to data blocks are traced, and the interval for generating a new snapshot volume is dynamically shifted to follow fluctuations in the application load. The snapshot system LVHPsnap, built on the above technologies, performs well in representative experiments. For example, when the ratio of Finesnap volumes to Checkpoint volumes is 4:1 and 8:1, the performance of LVHPsnap is 109.45% and 130.45% higher, respectively, than that of the logical volume management system LVM2, while LVHPsnap uses only 43.40% and 31.35% of the space used by LVM2.
Dynamic Reconfiguration in Reconfigurable System-on-Chip Design Flow
Chen Yu, Li Renfa, Zhu Hai, and Yuan Hu
2012, 49(3):  646-660. 
In recent years, the reconfigurable system-on-chip has become a promising technical solution for complicated computation in scientific research and embedded implementations. In light of the lack of a complete overview of the whole design flow for dynamically reconfigurable systems, and of a transparent programming process for system designers, this paper proposes a new design methodology based on a function-level programming model at the system level. In this programming model, designers can use a high-level language to complete the functional specification by calling the co-function library. At the detailed design level, an online real-time scheduling algorithm based on speed-up ratio is used to manage reconfigurable resources. At the implementation level, the key technologies for implementing a prototype system are described. Experiments and tests verify that placing the run-time reconfiguration problem within the whole design flow enhances the efficiency of system development.
Fast and Live Whole-System Migration of Virtual Machines
Zhang Xiang, Huo Zhigang, Ma Jie, and Meng Dan
2012, 49(3):  661-668. 
Live, whole-system migration of virtual machines is important for virtual platforms in wide-area and local-area networks where shared storage is not deployed. However, a whole-system image is often tens of gigabytes, and transferring that much data occupies too much I/O bandwidth and incurs serious time overhead. A fast migration method is proposed, which includes three key technologies: file-system-aware block device migration, which uses the file system's disk block allocation bitmap so that migration copies only used disk blocks, reducing by half the disk image data transferred during the first migration phase; xor-based compression, which exploits the self-similarity of image data to effectively reduce the amount of data transferred in the last two migration phases with low time overhead; and parallel migration, which parallelizes each migration phase and overlaps the costs of reading, writing, compressing, uncompressing, sending, and receiving image data. Experimental results demonstrate that, compared with traditional compression-based migration, the fast migration method reduces migration time by 50%, downtime by 21.68% in the best case, and downtime by 14.48% on average. It can also speed up migration under extreme conditions, where network bandwidth is limited or the workload in the virtual machine is intensive.
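A minimal sketch of the xor-based compression idea: each page of the image is xor-ed against the version already sent, so unchanged bytes become zero runs that a standard compressor shrinks well. The page abstraction and the use of zlib are illustrative assumptions; the paper's implementation may differ.

```python
import zlib

def xor_compress(page: bytes, prev: bytes) -> bytes:
    """Xor against the previously sent copy, then compress the sparse delta."""
    delta = bytes(a ^ b for a, b in zip(page, prev))
    return zlib.compress(delta)

def xor_decompress(blob: bytes, prev: bytes) -> bytes:
    """Invert: decompress the delta and xor it back with the previous copy."""
    delta = zlib.decompress(blob)
    return bytes(a ^ b for a, b in zip(delta, prev))

page_v1 = bytes(4096)                  # an all-zero 4 KiB page already sent
page_v2 = b"x" * 16 + page_v1[16:]     # the same page with a small change
assert xor_decompress(xor_compress(page_v2, page_v1), page_v1) == page_v2
```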
A High-Speed Delay-Independent Synchronous to Asynchronous Interface
Peng Yao, Yang Yintang, Zhu Zhangming, and Zhou Duan
2012, 49(3):  669-678. 
This paper proposes a novel interface for multi-processor system-on-chip and network-on-chip designs. The interface, implemented as a circular FIFO with threshold gates, removes the synchronous clock from the sender, resolving the problems of high power consumption induced by the clock signal and low reusability of IP cores. With a transmission mode combining serial and parallel communication, data of different widths can be transferred rapidly from a synchronous sender to an asynchronous receiver. Since a distributed framework is utilized, the data transport channel is separated from the transfer control blocks, such as the synchronizer and the write/read pointers. In this way, different reliability requirements can be met by changing the number of synchronizer stages, and various asynchronous transport protocols are supported gracefully by the interface. When the two-rail encoding transfer manner is selected, the transmission is quasi-delay-insensitive and data integrity is ensured. Based on SMIC 0.18 μm CMOS technology, simulation results for a 3-stage FIFO show a delay of 613 ps with an average energy consumption of 3.05 pJ per transfer request serviced, which satisfies the requirements of high speed, low power, strong robustness, and good reusability in the design of multiprocessor SoCs and networks-on-chip.