ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 August 2014, Volume 51 Issue 8
Summary of Storage System and Technology Based on Phase Change Memory
Zhang Hongbin, Fan Jie, Shu Jiwu, Hu Qingda
2014, 51(8):  1647-1662.  doi:10.7544/issn1000-1239.2014.20131123
With the widening performance gap between CPU and memory, the "memory wall" problem has become increasingly prominent. To bridge this gap, many DRAM-based solutions have been proposed. However, DRAM is approaching its limits in density and energy cost, so designing a practical memory architecture that addresses this problem has become a pressing issue. In recent years, phase change memory (PCM) has attracted great attention from researchers at home and abroad for its high density and low energy cost. In particular, its non-volatility and byte addressability are blurring the boundary between memory and storage, which may bring significant changes to future memory architectures. This paper mainly discusses PCM-based main memory architectures and related techniques for tolerating slow writes, wear leveling, erasure codes, reuse of failed blocks, and software optimization. It also discusses the application of PCM in storage systems and its effects on the design of storage architectures and computer systems. Finally, the research work is summarized and possible research directions are pointed out.
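Among the wear-leveling techniques this survey covers, the Start-Gap scheme of Qureshi et al. is perhaps the simplest to illustrate. The sketch below is a minimal rendering of its algebraic mapping, with illustrative parameters, rather than code from any surveyed system:

```python
# Minimal sketch of the Start-Gap wear-leveling idea (illustrative only):
# N logical lines live on N+1 physical lines, and an empty "gap" line
# rotates one slot every PSI writes so writes spread across the PCM array.

class StartGap:
    def __init__(self, n, psi=100):
        self.n, self.psi = n, psi
        self.start = 0          # rotation offset, bumps once per full sweep
        self.gap = n            # physical index of the empty gap line
        self.writes = 0

    def phys(self, logical):
        pa = (logical + self.start) % self.n
        return pa + 1 if pa >= self.gap else pa     # skip over the gap

    def write(self, memory, logical, data):
        memory[self.phys(logical)] = data
        self.writes += 1
        if self.writes % self.psi == 0:
            self._move_gap(memory)

    def _move_gap(self, memory):
        if self.gap == 0:       # wrap: pull the top line down, rotate mapping
            memory[0] = memory[self.n]
            self.gap, self.start = self.n, (self.start + 1) % self.n
        else:                   # copy the line below the gap into the gap
            memory[self.gap] = memory[self.gap - 1]
            self.gap -= 1
```

Because the mapping is purely algebraic, no remapping table is needed, which is what makes schemes of this family attractive for PCM.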
Original Paper
MDDS: A Method to Improve the Metadata Performance of Parallel File System for HPC
Chen Qi, Chen Zuoning, Jiang Jinhu
2014, 51(8):  1663-1670.  doi:10.7544/issn1000-1239.2014.20121094
With the growth of supercomputers' computational power and of the problem size and complexity targeted by applications, higher I/O subsystem performance is required, yet the throughput of a single metadata server limits the performance of a parallel file system under highly concurrent access and high-frequency file creation/deletion. Focusing on the typical parallel I/O scenario in high performance computing, MDDS (metadata delegation service) is implemented in the Lustre file system. It uses loose coupling to keep the cluster highly available, organizes the MDDS namespace by directory subtree to avoid the complexity and inefficiency of the distributed atomic operations introduced by cross-node operations, and uses a metadata migration mechanism to avoid moving object data between data servers. Oriented to the I/O-forwarding framework, two job-scheduling strategies, scheduling one job on a single MDDS and sharing multiple MDDSs among jobs, are introduced to balance the metadata request load within and between jobs. The performance of MDDS is evaluated on 116 storage servers. Initial experimental results show that MDDS achieves quasi-linear metadata performance scalability, and even better scalability than Lustre CMD (clustered metadata design) on larger clusters. The two job-scheduling strategies distribute the applications' metadata access load effectively and overcome the performance bottleneck of file metadata access in HPC.
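The directory-subtree delegation can be illustrated with a toy mapping in which each top-level subtree is assigned wholesale to one metadata server, so a pathname always resolves on a single server; the names below are hypothetical and the real MDDS protocol is considerably more involved:

```python
import hashlib

# Toy illustration of subtree-based metadata delegation (hypothetical names,
# not the MDDS implementation): all paths under one top-level subtree map to
# the same metadata server, so create/unlink never spans servers and needs
# no distributed atomic operation.

def mds_for_path(path, n_servers):
    subtree = path.strip("/").split("/")[0]      # delegate by top-level dir
    digest = hashlib.md5(subtree.encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_servers

print(mds_for_path("/job42/input/part-000", 8))   # same server ...
print(mds_for_path("/job42/output/log.txt", 8))   # ... for the whole subtree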
Performance Study of Exact Minimum Bandwidth Regenerating Codes in Distributed Storage
Wei Dongsheng, Li Jun, Wang Xin
2014, 51(8):  1671-1680.  doi:10.7544/issn1000-1239.2014.20121095
Distributed storage systems introduce redundancy to ensure data reliability against node failures, and repairing failed nodes consumes a significant amount of bandwidth. Regenerating codes achieve the optimal tradeoff between storage overhead and repair bandwidth overhead. Given that bandwidth is more precious than computing resources in distributed storage systems, exact minimum bandwidth regenerating (E-MBR) codes, which can be implemented by a product-matrix construction, enjoy the advantages of both regenerating codes and systematic codes and place no restrictions on the construction parameters, making them a promising candidate for deployment in distributed storage systems. However, the performance overhead of distributed storage systems based on this coding scheme has not been investigated and analyzed. This paper gives a formal description of the coding operations, which fall into three distinct phases: uploading, downloading and repairing. We then analyze the impact of CPU utilization, file size, buffer size and Galois field size on the computing rates in these three phases. We find that distributed storage systems based on E-MBR codes can achieve high computing throughput if the construction parameters of E-MBR codes are configured appropriately.
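For background, the MBR point of the storage/repair-bandwidth tradeoff (Dimakis et al.) fixes the per-node storage α and per-helper repair traffic β from the file size B and the parameters k and d; the sketch below computes these quantities and is not the paper's evaluation code:

```python
from fractions import Fraction

# Background sketch: parameters at the minimum-bandwidth regenerating (MBR)
# point of an (n, k, d) regenerating code storing a file of B symbols.
# Each of d helpers sends beta symbols to repair a failed node; per-node
# storage alpha equals the total repair bandwidth gamma = d * beta.

def mbr_point(B, k, d):
    beta = Fraction(2 * B, k * (2 * d - k + 1))   # symbols per helper node
    return d * beta, beta, d * beta               # (alpha, beta, gamma)

alpha, beta, gamma = mbr_point(B=36, k=4, d=6)    # e.g. n=8, k=4, d=6
print(alpha, beta, gamma)                         # 12, 2, 12 symbols
```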
Network Situation Prediction Method Based on Spatial-Time Dimension Analysis
Liu Yuling1,2,3, Feng Dengguo1,2, Lian Yifeng1,2,3, Chen Kai3, Wu Di1,2
2014, 51(8):  1681-1694.  doi:10.7544/issn1000-1239.2014.20121050
Network security situation prediction methods help the security administrator better understand the current network security situation and its trend. However, existing prediction methods cannot precisely reflect how the future network security situation will vary as security elements change, nor do they handle the impact of the interactions between security elements on the future situation. In view of this, a network situation prediction method based on spatial-time dimension analysis is presented. The proposed method extracts security elements from the attacker, the defender and the network environment, and predicts and analyzes these elements along the time dimension to provide data for the situation calculation. Using the predicted elements, the impact of neighboring nodes' security situation elements is computed based on spatial data mining theory and combined with each node's degree of importance to obtain the security situation value. To evaluate the method, experiments are conducted on MIT Lincoln Laboratory's public dataset. The results indicate that our method is suitable for real network environments and is considerably more accurate than the ARMA model method.
StegoP2P: A Hidden Communication Approach in P2P Networks
Tan Qingfeng1,2,4, Fang Binxing1,2,3, Shi Jinqiao1,2, Xu Fanwen3, Chen Xiaojun1,2,4
2014, 51(8):  1695-1703.  doi:10.7544/issn1000-1239.2014.20121202
With the development of the Internet, privacy preservation has become an increasingly prominent problem. Existing anonymous communication systems, such as Tor and Freenet, can conceal who communicates with whom, but they cannot hide the fact that users are employing anonymous communication technologies. Meanwhile, file-sharing software such as BitTorrent and eMule is among the most popular applications on the Internet, with users all over the world. In this paper, we present StegoP2P, a peer-to-peer hidden communication method that does not rely on a single system or a fixed set of entry points. It embeds a steganographic marker in the peer-to-peer metadata exchange protocol, unlike existing covert communication methods that rely on timing channels and require time synchronization. An efficient covert handshake protocol with steganographic marker techniques over peer-to-peer networks is proposed for unobservable communication, allowing users in peer-to-peer networks to exchange information secretly and circumvent Internet censorship. The steganography makes it easy for users to find the targeted content and difficult for a censor to identify them. Experimental results and security analysis show that our system has high performance and can defend against certain kinds of traffic censorship.
A Polymorphic Shellcode Detection Method Based on Dual-Mode Virtual Machine
Luo Yang, Xia Chunhe, Li Yazhuo, Wei Zhao, Liang Xiaoyan
2014, 51(8):  1704-1714.  doi:10.7544/issn1000-1239.2014.20121149
Malicious code such as viruses and worms propagates and attacks via the Internet, seriously compromising network security. By exploiting vulnerabilities of services running on a remote host, malicious code can inject shellcode into the host and gain complete control over it. Recently, shellcode attacks have tended to evade network detection by employing polymorphic techniques. However, previous detection methods lack real-time performance and resilience against evasion, and do not consider distinguishing polymorphic shellcode from normally shell-protected code. We propose an enhanced polymorphic shellcode detection approach based on a dual-mode virtual machine. We first filter network traffic via an improved GetPC location mechanism and IA-32 instruction recognition, and then execute the target instruction flow in a dual-mode virtual machine that transfers the states of a finite automaton between control-flow mode and data-flow mode according to various decision conditions. A prototype system has been implemented, and experimental results on real network data mixed with polymorphic shellcode samples indicate that the approach succeeds in differentiating polymorphic shellcode from shell-protected software without losing accuracy, with a time complexity between that of static analysis and dynamic emulation, which makes it an encouraging method for network attack detection.
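The GetPC constructs such a filter targets are well documented in the shellcode literature, e.g. call/pop sequences and the fnstenv trick; the toy scanner below illustrates the flavor of such filtering and is not the paper's improved mechanism:

```python
import struct

# Toy heuristic scanner for common GetPC constructs in an IA-32 byte stream
# (an illustrative sketch, not the paper's improved mechanism). Flags:
#   - CALL rel32 (opcode E8) whose target lands inside the buffer and is
#     followed by a POP reg (58..5F), the classic call/pop GetPC;
#   - FNSTENV [esp-12] (D9 74 24 F4), which leaks EIP to memory.

def find_getpc(buf):
    hits = []
    for i in range(len(buf) - 4):
        if buf[i] == 0xE8:                                  # CALL rel32
            rel = struct.unpack_from("<i", buf, i + 1)[0]
            target = i + 5 + rel
            if 0 <= target < len(buf) and 0x58 <= buf[target] <= 0x5F:
                hits.append((i, "call/pop GetPC"))
        if buf[i:i + 4] == b"\xd9\x74\x24\xf4":             # FNSTENV trick
            hits.append((i, "fnstenv GetPC"))
    return hits

sample = b"\x90\x90\xe8\x00\x00\x00\x00\x58\x83\xc0\x07"   # call $+5; pop eax
print(find_getpc(sample))                                   # [(2, 'call/pop GetPC')]
```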
Safe Variable-Capacity Self-Recovery Watermarking Scheme
Shi Hui1,2, Li Mingchu1
2014, 51(8):  1715-1726.  doi:10.7544/issn1000-1239.2014.20130232
Since existing watermarking schemes usually cannot recover tampered regions, a safe variable-capacity self-recovery watermarking scheme is proposed that takes both watermark embedding capacity and security into account. The original image is divided into texture blocks and smooth blocks; texture blocks store not only the traditional information but also "details" information, i.e., the texture information, which not only effectively resists the mean attack but also helps to improve the quality of the restored image to meet practical needs. According to the characteristics of the different block types, compound watermarks of different lengths are produced, where "compound watermarks" comprise authentication watermarks and information watermarks: authentication watermarks are used to detect the tampered area, while information watermarks, consisting of a basic watermark and an additional watermark, are used to restore the image. The compound watermarks are inserted into other blocks using the newly proposed three-level secret-key embedding scheme (TLSES), and tampered blocks are then detected and restored by the three-level tamper detection scheme (TLTDS). Experimental results show that the scheme not only accurately detects tampered regions and restores the image, but also effectively resists the mean attack and the collage attack.
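The texture/smooth partition can be illustrated by a simple block-variance test (an illustrative criterion only; the paper's own classification rule may differ):

```python
import numpy as np

# Illustrative texture/smooth block classification by local variance (the
# paper's actual criterion may differ). Texture blocks would then carry the
# longer compound watermark that includes the "details" information.

def classify_blocks(img, block=8, var_thresh=100.0):
    h, w = img.shape
    labels = {}
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            v = img[y:y + block, x:x + block].astype(float).var()
            labels[(y, x)] = "texture" if v > var_thresh else "smooth"
    return labels

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(sum(lab == "texture" for lab in classify_blocks(img).values()))
```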
Identity-Based Authenticated Asymmetric Group Key Agreement
Zhang Qikun1, Wang Ruifang1, Tan Yu'an2
2014, 51(8):  1727-1738.  doi:10.7544/issn1000-1239.2014.20121165
An asymmetric group key agreement (AGKA) protocol enables external users to securely send messages to group members. With the development of large-scale collaborative computing in distributed networks, the members participating in a collaborative computation may come from different domains, different time zones and different cloud networks. Existing AGKA protocols cannot secure information exchange among group members from different domains or heterogeneous networks, and they are only secure against passive attacks, which are too weak to capture the attacks in the real world. In this paper, we formalize an active security model for an identity-based authenticated asymmetric group key agreement (IB-AAGKA) protocol. Our protocol achieves asymmetric group key agreement in only one round, resolving the problems that it is hard to find a trusted party to serve as a dealer in a regular broadcast scheme, and that it is inconvenient to require all parties in different time zones to stay online concurrently to run a (two-round or multi-round) regular GKA protocol. Our protocol also achieves anonymous authentication, and supports dynamic group key updates to provide forward and backward secrecy of the group key. The protocol is proven secure under the decisional bilinear Diffie-Hellman (DBDH) assumption, and performance analysis shows that it is highly efficient.
±1 Steganographic Codes by Applying Syndrome-Trellis Codes to Dynamic Distortion Model in Pixel Chain
Bao Zhenkun1,2, Zhang Weiming1,3, Cheng Sen1,2, Zhao Xianfeng4,5
2014, 51(8):  1739-1747.  doi:10.7544/issn1000-1239.2014.20121213
Double-layered STC (syndrome-trellis code) is the most popular method for minimizing the distortion of ±1 steganography. However, it is a probabilistic algorithm that may fail during embedding on some given profiles, and it also has high computational complexity. Starting from these two points, we propose a dynamic distortion model defined on a pixel chain. The model works on the principle that the SLSB (second least significant bit) of the current pixel is used to control the LSB (least significant bit) of the next pixel, so the distortion of some pixels can be adjusted to zero. We apply STC to this dynamic distortion model and obtain a novel method for ±1 steganography. Compared with double-layered STC, experimental results show that the proposed method has comparable distortion-minimizing ability with significantly improved embedding speed, and it avoids embedding failures. Taken together, these advantages make the method well suited to practical steganography systems and software.
Dynamically Tolerating and Detecting Asymmetric Races
Wang Wenwen1,2, Wu Chenggang1, Paruj Ratanaworabhan3, Yuan Xiang1,2, Wang Zhenjiang1, Li Jianjun1, Feng Xiaobing1
2014, 51(8):  1748-1763.  doi:10.7544/issn1000-1239.2014.20130123
Asymmetric races are a common type of data race. They are triggered when one thread accesses a shared variable inside a critical section while another thread accesses the same shared variable outside any critical section, or inside a critical section guarded by a different lock. Asymmetric races in multi-threaded programs are usually harmful. To address them, ARace is proposed, which utilizes shared-variable protection and a write buffer to dynamically tolerate and detect asymmetric races. Shared-variable protection guards shared variables that are read-only or read-before-write in critical sections and must not be modified outside them; the write buffer buffers write operations to shared variables in critical sections. ARace can not only tolerate asymmetric races triggered by shared-variable accesses inside and outside critical sections, but also detect asymmetric races triggered by shared-variable accesses in concurrent critical sections. ARace can be applied directly to binary code and requires neither compiler nor hardware support. In addition, an implementation based on dynamic binary instrumentation is presented. Experimental results demonstrate that ARace guarantees tolerance and detection of asymmetric races while incurring acceptable performance and memory overhead.
SER-Tvpack: An SER Estimation-Based Clustering Method for SRAM-Based FPGAs
Xia Jing1,2,3, Wang Tiancheng2, Lü Tao2, Li Huawei2, Kuang Jishun1
2014, 51(8):  1764-1772.  doi:10.7544/issn1000-1239.2014.20120970
With the wide use of SRAM-based FPGAs (SFPGAs) in various fields, reliability is becoming an increasingly important concern. We propose an SER (soft error rate) estimation-based clustering method, SER-Tvpack, which adds SER as a reliability factor to the cost function. Combining EPP (error propagation probability) with NER (node error rate) estimated via the existing ISPL metric, which has been shown to predict post-placement wirelength accurately, we can estimate SER in the clustering stage. Based on the fact that the SER of inter-CLB nets is much higher than that of nets inside CLBs, SER-Tvpack reduces the soft fault rate (SFR) by absorbing high-SER nets into CLBs as much as possible and leaving low-SER nets outside. Experimental results show that SER-Tvpack reduces SFR by 14.5% compared with the baseline T-Vpack, whereas the previous F-Tvpack reduces SFR by 4.11%. Furthermore, it reduces the critical path delay by 2.31% in comparison with F-Tvpack, with only a 0.04% increase in area overhead.
Bus-Invert Encoding Oriented Low Power Scheduling Method
He Yanxiang, Chen Yong, Wu Wei, Xu Chao, Li Qingan
2014, 51(8):  1773-1780.  doi:10.7544/issn1000-1239.2014.20130066
Power consumption is an important aspect of embedded system design. Aiming at bus power consumption, one of the most important components of total power, and at balanced bus usage, we propose a low-power instruction scheduling method oriented to bus-invert encoding, which greatly improves the encoding's efficiency. First, the method uses a profiling tool such as sim-profile to obtain the branch access frequencies of the program. Then the data random enhancement scheduling of instructions (DRESI) is proposed to schedule both the entry instruction and the intra-block instructions of each basic block, guided by the branch frequency information. It not only reduces the total number of bus inversions but also balances inversions across buses. Experiments on MiBench benchmark test cases show that the method obtains a considerable reduction in bus inversions and better balance between buses: about 26% average reduction in bus inversions compared with the original GCC -O2 instruction sequence, more than 10% reduction compared with the VSI+BI method, and about 14.4% average improvement in the balance of bus inversions.
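The underlying bus-invert encoding of Stan and Burleson is simple to state: if more than half of the bus lines would toggle, transmit the complemented word and assert an extra invert line. A minimal sketch:

```python
# Classic bus-invert encoding (Stan and Burleson): if transmitting `word`
# would toggle more than half of the n bus lines relative to the previous
# value, send its complement and assert the invert line instead.
# DRESI-style scheduling reorders instructions so consecutive bus words
# need fewer, and more evenly distributed, inversions.

def bus_invert(prev, word, n=32):
    mask = (1 << n) - 1
    toggles = bin((prev ^ word) & mask).count("1")
    if toggles > n // 2:
        return (~word) & mask, 1     # send inverted word, invert flag = 1
    return word, 0

sent, flag = bus_invert(0x0000FFFF, 0xFFFF0000)
print(hex(sent), flag)   # 0xffff 1: the inverted word equals the previous
                         # value, so only the invert line toggles
```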
A Personalized Web Service Recommendation Method Based on Latent Semantic Probabilistic Model
Hu Yan1,2, Peng Qimin1, Hu Xiaohui1
2014, 51(8):  1781-1793.  doi:10.7544/issn1000-1239.2014.20130024
To meet service users' personalized requirements, a latent semantic probabilistic model is proposed to predict users' criteria preferences for Web service recommendation. Users' criteria preferences are mainly affected by two key elements: the users themselves and their service situations. First, the latent semantic relations among users, their criteria preferences and service situations are established through latent classes in the model. To describe the multifaceted characteristics of users, service situations and criteria preferences, all of them are allowed to belong to multiple latent classes simultaneously, with different probabilities. Then, the expectation-maximization algorithm and consistent training data obtained by the analytic hierarchy process are used to estimate the parameters of the model, which contains latent variables. Finally, the trained model is employed to predict users' criteria preferences under specific service situations when users are unwilling to provide them due to lack of domain knowledge. The main advantage of the proposed model over standard memory-based collaborative filtering and clustering-improved collaborative filtering is an explicit and compact model representation. Experimental results show that the algorithm based on the latent semantic probabilistic model achieves higher prediction accuracy than both the standard and the improved collaborative filtering algorithms, and also alleviates the impact of data sparsity.
An Optimized Acquisition Algorithm in GPS Software Receiver
Cui Shaolong1,2, Yao Xiangzhen3, Fang Jinyun1
2014, 51(8):  1794-1801.  doi:10.7544/issn1000-1239.2014.20130042
In a GPS software receiver, acquisition involves massive correlation operations that consume considerable time and hardware resources. Using the FFT algorithm noticeably reduces processing time, but on embedded devices this method remains limited by scarce hardware resources and long running time. This paper therefore proposes an adaptive search algorithm that partitions the search into multiple step levels and uses different step sizes to improve acquisition efficiency. The method performs a multi-level step search to find the 2-D spectrum peak of the circular cross-correlations, preferring large steps so as to reduce search operations as far as possible. Because each satellite's search uses multi-level steps rather than the conventional partition's single, finest-level step, many FFT operations can be eliminated; to avoid unnecessary computation, lookup tables store the various step levels. The method removes redundant FFT operations in acquisition and improves its efficiency. Experimental results show that it improves acquisition efficiency and decreases processing time significantly.
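The correlation that acquisition repeats for every Doppler bin is the standard FFT-based parallel code-phase search; the generic sketch below (not the paper's optimized multi-level-step implementation) shows where the FFT cost arises:

```python
import numpy as np

# Standard FFT-based parallel code-phase search (generic sketch, not the
# paper's optimized implementation). Each Doppler bin costs one forward and
# one inverse FFT for the circular cross-correlation over all code phases;
# the adaptive multi-level step search prunes Doppler/phase bins so that
# many of these FFT evaluations are skipped.

def acquire(signal, ca_code, doppler_bins, fs):
    n = len(signal)                               # ca_code sampled to length n
    t = np.arange(n) / fs
    code_fft = np.conj(np.fft.fft(ca_code))
    best = (0.0, None, None)                      # (peak, doppler, code phase)
    for fd in doppler_bins:
        wiped = signal * np.exp(-2j * np.pi * fd * t)    # Doppler wipe-off
        corr = np.abs(np.fft.ifft(np.fft.fft(wiped) * code_fft)) ** 2
        if corr.max() > best[0]:
            best = (float(corr.max()), fd, int(corr.argmax()))
    return best
```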
Similarity Matching for Uncertain Time Series
Wu Honghua1, Liu Guohua1,2, Wang Wei1
2014, 51(8):  1802-1810.  doi:10.7544/issn1000-1239.2014.20121055
Similarity matching techniques for certain (deterministic) time series do not consider data uncertainty, but in the real world the time series collected by sensors are often uncertain. To address this, we pre-process uncertain time series along two dimensions, horizontal and vertical, that is, the time dimension and the probability dimension. First, an uncertain time series is compressed by the Haar wavelet transform. On this basis, we process the resulting uncertain time series longitudinally and put forward a representative-election method that adopts a maximum-probability method and a mean method to select a certain time series. After pre-processing, we perform dimensionality reduction and indexing on the generated certain time series. Finally, for each combination of uncertainty in the query sequence and the time series in the database, we propose a corresponding similarity matching algorithm.
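The time-dimension compression uses the usual pairwise average/difference Haar step, which halves the series length per level (a generic sketch):

```python
import numpy as np

# One level of the Haar wavelet transform (generic sketch): pairwise
# averages approximate the series at half length; pairwise differences
# hold the detail. Repeating on the averages compresses a series along
# the time dimension before the probability-dimension representative
# (max-probability or mean) is elected.

def haar_level(x):
    x = np.asarray(x, dtype=float)
    if len(x) % 2:                       # pad odd-length series
        x = np.append(x, x[-1])
    avg = (x[0::2] + x[1::2]) / 2.0      # approximation coefficients
    diff = (x[0::2] - x[1::2]) / 2.0     # detail coefficients
    return avg, diff

avg, diff = haar_level([4, 6, 10, 12, 8, 6, 5, 5])
print(avg)    # [ 5. 11.  7.  5.]
```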
A Neighborhood Rough Sets-Based Co-Training Model for Classification
Zhang Wei1,2,3, Miao Duoqian1,3, Gao Can4, Yue Xiaodong5
2014, 51(8):  1811-1820.  doi:10.7544/issn1000-1239.2014.20131049
Pawlak's rough set theory, as a supervised learning model, is only applicable to discrete data. However, practical data sets are often continuous and contain few labeled but abundant unlabeled examples, which is outside the realm of Pawlak's rough set theory. In this paper, a neighborhood rough sets-based co-training model for classification is proposed, which can deal with continuous data and exploit both unlabeled and labeled data to achieve better performance than a classifier learned only from the few labeled examples. First, a heuristic algorithm based on neighborhood mutual information is put forward to compute the reduct of partially labeled continuous data, and two diverse reducts are generated. The model trains two base classifiers on the labeled data using the two reducts, and lets the two classifiers teach each other on the unlabeled data to boost their performance iteratively. Experimental results on selected UCI datasets show that the proposed model is more effective at handling partially labeled continuous data than some representative alternatives in terms of learning accuracy.
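The co-training loop itself follows the standard Blum-Mitchell pattern, with the two neighborhood reducts playing the role of the two views; the schematic sketch below omits the reduct computation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Schematic co-training loop in the Blum-Mitchell style (the reduct
# computation via neighborhood mutual information is omitted). The two
# attribute reducts act as the two views: each classifier is trained on
# its own reduct and hands its most confident pseudo-labels on the
# unlabeled pool to the other classifier.

def co_train(X, y, labeled, pool, view1, view2, rounds=10, per_round=5):
    y = dict(y)                                  # index -> label
    labeled, pool = list(labeled), list(pool)
    for _ in range(rounds):
        for view in (view1, view2):
            if not pool:
                return labeled, y
            clf = LogisticRegression(max_iter=1000)
            clf.fit(X[np.ix_(labeled, view)], [y[i] for i in labeled])
            proba = clf.predict_proba(X[np.ix_(pool, view)])
            top = np.argsort(-proba.max(axis=1))[:per_round]
            for j in top:                        # most confident examples
                y[pool[j]] = clf.classes_[proba[j].argmax()]
            newly = [pool[j] for j in top]
            labeled += newly
            pool = [i for i in pool if i not in newly]
    return labeled, y
```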
Image Annotation by Semantic Neighborhood Learning from Weakly Labeled Dataset
Tian Feng1,2, Shen Xukun1
2014, 51(8):  1821-1832.  doi:10.7544/issn1000-1239.2014.20121087
With the advance of Web technology, image sharing has become much easier than ever before, and automatic image annotation, which predicts relevant labels for images, is becoming more and more important. Traditional image annotation methods usually require a large amount of completely and accurately labeled data to achieve good performance. However, since weakly labeled training data is much easier and cheaper to obtain than fully labeled data, image labels in real-world environments are often incomplete and inaccurate, and different labels usually have large frequency differences. To effectively harness such weakly labeled images, an automatic image annotation approach based on semantic neighborhood learning (SNLWL) is proposed in this paper. Missing labels are replenished by minimizing reweighted error functions on the training data. The semantic neighborhood is then obtained by a progressive neighborhood construction approach that incorporates label completeness, global similarity, conceptual similarity, and partial correlation. In addition, an effective label inference strategy that minimizes the neighborhood reconstruction error is proposed to handle label noise. Extensive experimental results on several benchmark datasets show that the proposed approach achieves a marked improvement over other methods.
HMM Training Model Using Blending Population Diversity Based Adaptive Genetic Algorithm
Wang Xianghai1,2,3, Cong Zhihuan1, Fang Lingling1, Song Chuanming1,3
2014, 51(8):  1833-1844.  doi:10.7544/issn1000-1239.2014.20121211
To address the parameter sensitivity problem of HMM models trained by genetic algorithms (GA), this paper first introduces the BPD-AGA algorithm based on blending population diversity. To enlarge the genetic search space during the iterations and improve the quality of the optimal solution, the algorithm adaptively adjusts the genetic parameters according to individual genotype and phenotype, which determine population diversity. Second, an HMM training model based on BPD-AGA is proposed. The model selects the individual that contributes most to blending population diversity as the competition champion via BPD-AGA adaptive selection; meanwhile, BPD-AGA adaptive crossover and mutation enlarge the genetic search space while protecting the competition champion, so that a globally optimal HMM solution can be pursued. Third, the Baum-Welch algorithm is combined with BPD-AGA to increase training convergence speed, yielding a hybrid BPD-AGA/Baum-Welch model that improves the GA both externally and internally. Fourth, the model and training algorithm are applied to vehicle driving status recognition in traffic video. Experimental results demonstrate the effectiveness of the proposed model and training algorithm.
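Diversity-driven parameter adaptation can be illustrated generically: estimate population diversity, then raise the crossover and mutation probabilities as diversity collapses. The rule below is illustrative and is not the exact BPD-AGA formula:

```python
import numpy as np

# Illustrative diversity-adaptive GA rates (not the exact BPD-AGA formulas):
# when population diversity collapses, crossover and mutation probabilities
# rise to re-enlarge the search space; when diversity is high they fall so
# that good schemata (e.g. the competition champion) survive.

def genotype_diversity(pop):
    """Approx. mean pairwise Hamming fraction of a 0/1 population, in [0,1]."""
    p = pop.mean(axis=0)                         # per-locus frequency of 1s
    return float(np.mean(2 * p * (1 - p))) * 2   # 2p(1-p) peaks at 0.5; rescale

def adaptive_rates(pop, pc_lo=0.5, pc_hi=0.95, pm_lo=0.01, pm_hi=0.2):
    d = genotype_diversity(pop)                  # 0 = converged, 1 = maximal
    pc = pc_hi - (pc_hi - pc_lo) * d             # low diversity -> high pc
    pm = pm_hi - (pm_hi - pm_lo) * d             # low diversity -> high pm
    return pc, pm

pop = np.random.randint(0, 2, (50, 32))
print(adaptive_rates(pop))
```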
Solving the Maximum Flow Problem of a Network with a Layering Method
Zhao Shu, Su Jianzhong, Liu Qianqian, Zhang Yanping
2014, 51(8):  1845-1853.  doi:10.7544/issn1000-1239.2014.20130184
The maximum flow problem is a classical combinatorial optimization problem, and as networks grow, improving the efficiency of maximum flow algorithms becomes crucial. For a given single-source single-sink network, a novel layering method for estimating its maximum flow is proposed in this paper. First, we transform the original directed network into a layered network. Then we compute the maximum flow of each sub-network formed by two adjacent levels of nodes in the layered network. Finally, the minimum of the sub-networks' maximum flows is taken as the estimated maximum flow of the original network. Experimental results on different networks show the efficiency of the proposed method: the maximum flow error is only about 1% on average, while the running time is on average only 11% of that of the classical Ford-Fulkerson algorithm; in the best case, it is 2% of the classical algorithm's running time and 25% of that of the improved two-phase capacity scaling algorithm.
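The layering estimate can be rendered compactly with networkx (a sketch of the idea; the paper's own sub-network construction and flow algorithm may differ):

```python
import networkx as nx

# Sketch of the layering estimate (details may differ from the paper).
# 1) BFS-layer the network from the source; 2) for each pair of adjacent
# layers, compute the max flow across just the edges between them, via a
# super source S* and super sink T*; 3) the minimum over all layer cuts is
# the estimate. Assumes every edge carries a 'capacity' attribute.

def layered_maxflow_estimate(G, s, t):
    layer = {s: 0}
    for u, v in nx.bfs_edges(G, s):
        layer.setdefault(v, layer[u] + 1)
    best = float("inf")
    for k in range(layer[t]):
        H = nx.DiGraph()
        for u, v, cap in G.edges(data="capacity"):
            if layer.get(u) == k and layer.get(v) == k + 1:
                H.add_edge("S*", u, capacity=float("inf"))
                H.add_edge(u, v, capacity=cap)
                H.add_edge(v, "T*", capacity=float("inf"))
        if "S*" in H:
            best = min(best, nx.maximum_flow(H, "S*", "T*")[0])
    return best

G = nx.DiGraph()
G.add_edge("s", "a", capacity=2); G.add_edge("s", "b", capacity=2)
G.add_edge("a", "t", capacity=2); G.add_edge("b", "t", capacity=2)
print(layered_maxflow_estimate(G, "s", "t"))   # 4 (exact for this network)
```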
Moveable Bubble Flow Control and Adaptive Routing Mechanism in Torus Networks
Wang Yongqing, Xie Lunguo, Fu Qingchao
2014, 51(8):  1854-1862.  doi:10.7544/issn1000-1239.2014.20121097
Bubble flow control is an efficient, buffer-occupancy-based technique for avoiding deadlock in torus networks. The critical bubble scheme avoids intra-dimension deadlock with just one packet buffer by marking and tracking a certain number of free packet-sized buffers as "critical", but it risks blocking. A false-packet protocol and a non-blocking moveable bubble scheme are presented to resolve the blocking induced by critical bubbles. The false-packet protocol adopts a simple request-acknowledge exchange, and the whole scheme is implemented on typical credit flows with no other special requirements. Unlike the typical bubble scheme, which is only locally aware, this scheme is also an efficient implementation of globally aware flow control. Combining an escape channel based on the moveable bubble scheme with a fully adaptive channel yields a fully adaptive router with a minimum of two virtual channels and one packet buffer per virtual channel. To show the advantage of this scheme, the performance of various bubble-based schemes is compared. Network simulation results show that the moveable bubble scheme outperforms the traditional bubble scheme, and the adaptive scheme performs notably better than the non-adaptive methods: it avoids blocking, achieves much lower average packet latency, and improves throughput by more than 20%, and by up to 100% in the best case.
Optimizing All_to_All Communication in InfiniBand
Chen Shuping, Lu Deping, Chen Zhongping
2014, 51(8):  1863-1870.  doi:10.7544/issn1000-1239.2014.20121117
All_to_All is an important collective operation, and no effective congestion control mechanism exists in current commercial InfiniBand networks. Two typical All_to_All algorithms, the make-pair algorithm and the post-all algorithm, are studied in this paper. We found that their network bandwidth utilization lies between 30% and 70% when sending messages larger than 32KB, and further analysis and experiments demonstrate that this is the result of heavy congestion in the network. In this paper, we adopt a novel algorithm to alleviate the congestion. It splits each large message into many small messages and sends them independently under an efficient scheduling scheme. It creates one reliable connection for each process pair and maintains a counter per connection that tracks outstanding send requests. When the counter exceeds a predefined threshold, congestion has occurred between the two processes, and the algorithm pauses posting send requests to the send queue of the congested connection to prevent the congestion from spreading through the entire network. Experimental results demonstrate that the new algorithm alleviates congestion effectively and improves All_to_All performance, raising network bandwidth utilization by 10% to 20% compared with the existing algorithms.
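The throttling rule can be sketched as follows; the function and class names are hypothetical stand-ins for the InfiniBand verbs calls the real implementation would use:

```python
from collections import deque

# Sketch of the congestion-avoiding send scheduler. post_send is a
# hypothetical stand-in for posting an InfiniBand verbs send request on a
# reliable connection. A large message is split into CHUNK-sized pieces;
# each connection counts its outstanding (uncompleted) sends and stops
# posting once the counter exceeds THRESHOLD, so local congestion cannot
# spread through the whole network.

CHUNK, THRESHOLD = 32 * 1024, 4

def post_send(peer, chunk):                 # stub for ibv_post_send(...)
    print(f"post {len(chunk)} bytes to rank {peer}")

class Connection:
    def __init__(self, peer):
        self.peer = peer
        self.outstanding = 0                # posted but not yet completed
        self.pending = deque()              # chunks waiting to be posted

    def send(self, message):
        for off in range(0, len(message), CHUNK):
            self.pending.append(message[off:off + CHUNK])
        self._pump()

    def _pump(self):                        # post while below the threshold
        while self.pending and self.outstanding < THRESHOLD:
            post_send(self.peer, self.pending.popleft())
            self.outstanding += 1

    def on_completion(self):                # driven by the completion queue
        self.outstanding -= 1
        self._pump()                        # resume once requests drain
```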
A Complex Event Detection Algorithm Based on Correlation Analysis
Shi Shengfei, Zhang Wei, Li Jianzhong
2014, 51(8):  1871-1879.  doi:10.7544/issn1000-1239.2014.20120813
In wireless sensor networks, traditional event detection algorithms suffer from heavy network transmission and energy consumption. To solve this problem, this paper presents a new complex event detection algorithm based on pattern matching and correlation analysis. A complex event not only changes the data of a single node but also affects the entire sensor network, so it often manifests as a particular pattern at a sensor node while the correlations between nodes satisfy certain relationships. Accordingly, event detection is divided into two parts: feature detection at a single node, and correlation analysis between multiple nodes. The computational load is distributed to the nodes through feature extraction and pattern matching, so the algorithm can control how frequently the correlation analysis is computed, which extends the network lifetime. To reduce network traffic, compressed data is used for the correlation analysis. Experiments show that the proposed algorithm effectively reduces energy consumption.
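The two-stage test can be sketched as a cheap per-node pattern match followed by an occasional cross-node correlation check (a generic sketch; thresholds and the compression of the shipped data are application-specific):

```python
import numpy as np

# Generic sketch of the two-stage test: (1) each node matches its own
# window against an event pattern via normalized cross-correlation; (2)
# only when enough nodes fire is the inter-node correlation checked, so
# the expensive stage runs rarely. Thresholds and the compression of the
# data shipped for stage 2 are application-specific.

def node_matches(window, pattern, thresh=0.8):
    w = (window - window.mean()) / (window.std() + 1e-9)
    p = (pattern - pattern.mean()) / (pattern.std() + 1e-9)
    return float(np.dot(w, p) / len(w)) > thresh

def complex_event(windows, pattern, min_nodes=3, corr_thresh=0.7):
    firing = [w for w in windows if node_matches(w, pattern)]
    if len(firing) < min_nodes:
        return False                    # cheap stage: nothing to analyze
    corr = np.corrcoef(np.vstack(firing))
    off_diag = corr[np.triu_indices(len(firing), k=1)]
    return bool(off_diag.mean() > corr_thresh)   # correlated across nodes
```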
A Distributed Underwater Target Detection Algorithm Based on Window Statistics
Liu Yihai, Zhang Xiaomin, Shen Yinjun, Yu Yang
2014, 51(8):  1880-1887.  doi:10.7544/issn1000-1239.2014.20121084
For an underwater sensor network (USN) with randomly deployed sensors, time-varying local detection performance, and no prior model of the intruding target, a distributed target fusion detection algorithm based on optimal window statistics is proposed. The algorithm addresses practical passive underwater target detection, in which only the sensors within a small zone around the vessel target can provide stable local detection results. The detection process is therefore carried out with a moveable virtual window that fuses the binary decisions reported by local sensors inside its coverage using the counting fusion rule; detection of the particular subarea with the largest number of sensors reporting "1" is then equivalent to detection of the point target. Compared with the point-target detection problem, this extended-area detection is more robust and reliable. The approximate system-level detection probability is derived analytically, and simulations compare the proposed algorithm with the existing nonparametric voting or counting fusion rules under practical scenarios. Results show that the proposed fusion rule achieves better system-level detection than the existing ones, as long as the scanning window size approximately matches the target's radiation signal region.
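The counting rule and the resulting window-level detection probability are standard: with N sensors in the window each reporting "1" independently with probability p, the window fires when at least T sensors report "1" (a background sketch, not the paper's derivation for the moving-window case):

```python
from math import comb

# Background sketch of the counting (T-out-of-N) fusion rule. With N local
# sensors inside the virtual window, each reporting '1' independently with
# probability p (p = Pd over the target, p = Pfa elsewhere), the window
# declares a detection when at least T sensors report '1'.

def counting_rule(decisions, T):
    return sum(decisions) >= T

def window_prob(N, p, T):
    """P(at least T of N independent sensors report '1'), a binomial tail."""
    return sum(comb(N, i) * p**i * (1 - p)**(N - i) for i in range(T, N + 1))

print(window_prob(N=10, p=0.7, T=6))   # window-level Pd near the target
print(window_prob(N=10, p=0.1, T=6))   # window-level Pfa far from it
```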