ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 March 2011, Volume 48 Issue 3
Theory and Algorithms on Localization in Wireless Sensor Networks
Wang Xiaoping, Luo Jun, and Shen Changxiang
2011, 48(3):  353-363. 
As a basis of many network protocols and applications, localization is one of the major supporting technologies for wireless sensor networks, making it an indispensable part of their design. In this paper, we comprehensively survey recent advances in the theory and algorithms of localization. On the theoretical side, we first give a formal definition of the localization problem; then we analyze its computational complexity under various configurations; next we introduce localization theory as developed from rigidity theory; and finally we present a special class of location-computable networks: sequentially localizable networks. Viewed globally, the study of localization theory underlies all of these achievements: it not only offers great insight into the localization problem but also solves many of its fundamental questions, especially in the two-dimensional case, and has inspired many new localization algorithms. We also summarize the typical localization algorithms related to localization theory, placing special emphasis on the design idea, scope of application, and drawbacks of each. Finally, we point out possible directions for further research in both localization theory and localization algorithms.
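To make the basic localization task concrete, the following is a minimal sketch of classical range-based multilateration by linearized least squares, one of the building blocks the surveyed algorithms refine. The anchor positions and distances are illustrative, not taken from the paper.

```python
# Hedged sketch: multilateration by linearized least squares.
# Anchors and noise-free distances below are illustrative only.

def trilaterate(anchors, dists):
    """Estimate a 2-D position from >= 3 anchor positions and ranges."""
    (x0, y0), d0 = anchors[0], dists[0]
    # Linearize |p - a_i|^2 = d_i^2 against the first anchor: A p = b.
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append((2 * (xi - x0), 2 * (yi - y0)))
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Solve the 2x2 normal equations A^T A p = A^T b by hand.
    s11 = sum(ax * ax for ax, ay in A)
    s12 = sum(ax * ay for ax, ay in A)
    s22 = sum(ay * ay for ax, ay in A)
    t1 = sum(ax * bi for (ax, ay), bi in zip(A, b))
    t2 = sum(ay * bi for (ax, ay), bi in zip(A, b))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [((true_pos[0] - x) ** 2 + (true_pos[1] - y) ** 2) ** 0.5
         for x, y in anchors]
print(trilaterate(anchors, dists))  # → close to (3.0, 4.0)
```

With noisy ranges the same normal equations give the least-squares estimate; the theoretical work surveyed above characterizes when such a position is uniquely determined at all.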
Paper
Localization Algorithm in Complex Area
Huang He, Chen Guoliang, Sun Yue, Xiao Mingjun, and Huang Liusheng
2011, 48(3):  364-373. 
Traditional wireless sensor network localization algorithms generally assume that there is a mapping function between the measured distance and the Euclidean distance for each pair of sensor nodes. This assumption, however, does not hold when the network is deployed in a complex area, so directly applying traditional algorithms to such networks results in large localization errors. To solve the localization problem in anisotropic wireless sensor networks deployed in complex areas, a range-free localization algorithm based on convex-hull partitioning (CHP) is proposed. First, all reference nodes are divided to form different convex hulls. Then each unknown node determines which convex hull it belongs to, and finally computes its own location from the reference nodes in that hull. Theoretical analysis shows that CHP can effectively reduce the localization errors incurred by the boundaries or obstacles of a complex area. Results from extensive simulations show that, compared with traditional algorithms, CHP significantly reduces localization errors and error jitter, and minimizes the unfavorable effects of boundaries and obstacles during execution.
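The hull-membership step of such a scheme can be sketched with a standard point-in-convex-hull test; the hulls and node below are illustrative, not the paper's partitioning procedure.

```python
def in_convex_hull(p, hull):
    """True if point p lies inside the CCW-ordered convex hull."""
    px, py = p
    for i in range(len(hull)):
        x1, y1 = hull[i]
        x2, y2 = hull[(i + 1) % len(hull)]
        # The cross product must be >= 0 for every CCW edge.
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
            return False
    return True

# Two illustrative convex hulls formed from reference nodes.
hulls = {"A": [(0, 0), (4, 0), (4, 4), (0, 4)],
         "B": [(5, 0), (9, 0), (9, 4), (5, 4)]}
node = (6.0, 2.0)
region = next(name for name, h in hulls.items() if in_convex_hull(node, h))
print(region)  # → B
```

Once the hull is known, the unknown node restricts its position estimate to the reference nodes of that hull, which is what shields the estimate from boundary and obstacle distortion.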
A Construction Technique of Constant Degree P2P Systems towards Efficient Complex Queries
Wang Xiaohai, Peng Yuxing, and Li Dongsheng
2011, 48(3):  374-381. 
Constant-degree peer-to-peer (P2P) systems are becoming a promising research hotspot in the P2P domain because constant-degree digraphs have good properties. However, it is often hard to convert a standard constant-degree digraph into a DHT scheme, so most research focuses on DHT construction and maintenance, leaving optimization and support for complex queries behind. The underlying topology strongly affects the character of the upper layers; for constant-degree P2P topologies, their inherent properties mean that a system built with the classical technique has poor data locality and is unfit for efficient, low-cost complex queries. To address this shortcoming, a general-purpose construction technique oriented toward efficient complex queries is proposed, which adds an embedding-transformation layer between the data layer and the DHT overlay. In this way, adjacent data are stored in adjacent peers of the overlay and data locality is improved, so the number of peers involved in a complex query can be minimized with limited time overhead. To validate the technique, FissionE, the first constant-degree P2P system based on the Kautz digraph, is reconstructed as a typical example, including resource re-allocation, the query algorithm, and locality maintenance strategies. Experimental results show that this construction technique ensures data locality, reduces query cost, and improves system efficiency without changing the underlying DHT layer.
Analysis of Free-riding Behaviors and Modeling Restrain Mechanisms for Peer-to-Peer Networks
Yue Guangxue, Li Renfa, Chen Zhi, and Zhou Xu
2011, 48(3):  382-397. 
Large amounts of network measurement show that free riding is prevalent in almost all real peer-to-peer (P2P) reliable streaming-media networks, reducing their robustness, availability, service response speed, and lifetime. Finding reasonable and effective mechanisms that prohibit free riders and incentivize selfish nodes to contribute more to the system has therefore become an important direction in applied research on P2P reliable streaming media. Based on an analysis of the intrinsic characteristics of free riding and its impact on system performance, the behaviors of streaming-media nodes are modeled, and the idea of maintaining moderate security (allowing some errors but no crimes) is introduced without sacrificing overall performance. Furthermore, game theory is used to restrain free riders and encourage them to be more altruistic. Reputation, contribution, and revenue of each node are adopted as metrics to assess the model; the existence of a Nash equilibrium for the model is proved, and its rules, constraints, and a detailed analysis are given as well. Simulations show that the proposed model is effective in countering free-riding behavior and improving the performance and quality of service (QoS) of the P2P reliable streaming-media network, while keeping it in relative balance.
An Incentive-Cooperative Forwarding Model Based on Punishment Mechanism in Wireless Ad Hoc Networks
Wang Bo, Huang Chuanhe, Yang Wenzhong, Dan Feng, and Xu Liya
2011, 48(3):  398-406. 
Due to the limited processing ability, storage, and energy of mobile nodes in wireless ad hoc networks, nodes tend to conserve their scarce resources and behave selfishly, so stimulating cooperative packet forwarding among nodes has become an important research topic. Following classic game theory, this paper first proposes a one-step game model to analyze the payoff matrix between neighbor nodes, extends it to an infinitely repeated game on cooperative packet forwarding to enhance collaboration, illustrates three punishment strategies against selfish behavior (a one-step punishment strategy, a severe punishment strategy, and a general punishment strategy), and then derives the corresponding conditions for incentivized cooperative forwarding. In this paper we focus on the general punishment mechanism. Finally, to verify the correctness and effectiveness of the scheme, we implement it with NS2, compare its performance with the classic AODV protocol, and display the incentive-cooperative process for selfish nodes with different utilities over the simulation time. Simulation results show that the scheme can effectively enhance cooperation, improve throughput among nodes, prolong the network lifetime, and increase the expected total payoff of all nodes.
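The kind of repeated-game cooperation condition the paper derives can be illustrated with the textbook grim-trigger bound; the payoff values below are the standard illustrative ones, not the paper's matrix.

```python
# Payoffs for a forwarding game between two neighbours (illustrative:
# not the paper's matrix). R = both forward, T = free-ride on a
# forwarder, P = both drop, with T > R > P.
R, T, P = 3.0, 5.0, 1.0

def min_discount_grim_trigger(R, T, P):
    """Smallest discount factor at which 'defect once, be punished
    forever' makes cooperation a best response.  From
    T + d*P/(1-d) <= R/(1-d) one gets d >= (T-R)/(T-P)."""
    return (T - R) / (T - P)

d = min_discount_grim_trigger(R, T, P)
print(d)  # → 0.5
```

Milder punishment strategies (such as a one-step punishment) raise this threshold, which is the trade-off behind choosing a general punishment mechanism.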
Indexing Based Multi-Level Clustering Routing Algorithm in Public Transportation Delay Tolerant Networks
Li Zhi, Zha Xuanyue, Liu Fengyu, and Zhang Hong
2011, 48(3):  407-414. 
In the wireless networks formed by the vehicles of a public transportation system, the topology changes rapidly and links between nodes connect intermittently due to high mobility; this is a classic application scenario of delay tolerant networks (DTN). The characteristics of nodes in a public transportation system mean that their mobility (such as time and path) follows a special rule. Based on this characteristic, an abstract DTN topology is presented and a new communication-efficiency weight is defined, which provides a more accurate standard for describing the communication capability between node pairs than previous metrics. By computing this weight, nodes with similar mobility models can be recognized. An indexing-based multi-level clustering algorithm is proposed to cluster nodes that have strong communication capability with each other, and a novel DTN routing algorithm is designed on this cluster structure, in which each node's cluster information guides packet forwarding when two nodes encounter each other. Experiments based on the pre-clustering information show that the proposed routing algorithm is more efficient than other DTN routing algorithms used in high-speed public transportation systems.
Virtual Monotonic Counters Using Trusted Platform Module
Li Hao, Qin Yu, and Feng Dengguo
2011, 48(3):  415-422. 
Any secure storage system needs to address at least three security issues: confidentiality, integrity, and freshness. Of these, freshness is the most challenging. Traditional software-based solutions reside on the storage device itself, such as a hard disk, and therefore cannot solve the problem: an attacker can replay the whole disk using an out-of-date image of it. The only solution is to employ some form of irreversible state change. In this paper, we analyze replay attacks on storage and propose a TPM-based solution that builds virtual counters to defend against them. In this solution, we build a virtual counter manager (VCM) using three TPM mechanisms: TPM counters, transport sessions, and protection of private keys; with the VCM we can then create and manage a large number of trusted virtual counters. Furthermore, an algorithm for detecting malicious operations of the VCM is presented to ensure its trustworthiness, so the security of our solution depends only on the tamper-resistant TPM. Finally, the performance of the solution is analyzed, and two changes are proposed to improve it and keep the anti-replay solution feasible.
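The core multiplexing idea can be sketched as follows. This is a simplified illustration of binding many virtual counters to one monotonic counter, with the TPM's signing and transport sessions elided; the class and method names are ours, not the paper's.

```python
class VirtualCounterManager:
    """Hedged sketch: every virtual-counter increment bumps one physical
    monotonic counter, and each log entry is bound to the physical value
    at that moment, so replaying an older log becomes detectable.
    TPM signing and transport-session protection are elided."""

    def __init__(self):
        self._physical = 0   # stands in for the TPM monotonic counter
        self._log = []       # (physical_value, counter_id, virtual_value)
        self._virtual = {}

    def increment(self, cid):
        self._physical += 1  # the irreversible state change
        self._virtual[cid] = self._virtual.get(cid, 0) + 1
        self._log.append((self._physical, cid, self._virtual[cid]))
        return self._virtual[cid]

    def verify(self):
        """A rolled-back log is detected because its last entry no
        longer matches the physical counter."""
        return bool(self._log) and self._log[-1][0] == self._physical

vcm = VirtualCounterManager()
vcm.increment("disk"); vcm.increment("mail"); vcm.increment("disk")
ok_before = vcm.verify()
vcm._log = vcm._log[:-1]          # simulate replaying an older state
print(ok_before, vcm.verify())    # → True False
```

In the real scheme the log entries would be signed inside TPM transport sessions, so the verification cannot be forged in software.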
Robust Covert Timing Channel Based on Web
Qian Yuwen, Zhao Bangxin, Kong Jianshou, and Wang Zhiquan
2011, 48(3):  423-431. 
To solve the problem that covert timing channels work unstably on the Internet, a model of a robust covert timing channel (RCTC) running over the Web using the HTTP protocol is proposed. In this model, inter-packet delays are used to transmit covert information, and the HTTP acknowledgment packet serves as a confirmation of the covert information, forming a bidirectional covert channel. A reliable communication protocol that keeps the transmitter and receiver of the covert information synchronized is designed to ensure the stability of the RCTC. To improve the channel's efficiency, the encoding of the covert information is analyzed and a scheme mapping two bits to one inter-packet delay is adopted. The capacity of the RCTC is derived from queueing theory. An experimental RCTC environment is constructed on the Internet and several covert-communication experiments are conducted. The results show that the capacity of the RCTC is about 11 times that of a traditional timing channel, its robustness is much better than that of a traditional covert timing channel, and the channel remains reliable even when the network communication quality is poor.
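The "two bits per inter-packet delay" encoding can be sketched as a four-bin mapping with nearest-bin decoding; the bin centres below are illustrative, not the paper's parameters.

```python
# "2 bits per inter-packet delay": each 2-bit symbol selects one of four
# delay bins.  Bin centres (ms) are illustrative values.
BINS_MS = {0b00: 20, 0b01: 40, 0b10: 60, 0b11: 80}

def encode(bits):
    """bits: '0'/'1' string, length a multiple of 2 -> list of delays."""
    return [BINS_MS[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2)]

def decode(delays):
    # Decode by nearest bin centre, tolerating network jitter.
    def nearest(d):
        return min(BINS_MS, key=lambda s: abs(BINS_MS[s] - d))
    return "".join(format(nearest(d), "02b") for d in delays)

delays = encode("1001")
jittered = [d + 4 for d in delays]   # +4 ms jitter on every gap
print(delays, decode(jittered))      # → [60, 40] 1001
```

Nearest-bin decoding is what buys robustness against moderate jitter; the acknowledgment path in the model then catches the residual errors.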
A Q-Learning Based Real-Time Mitigating Mechanism against LDoS Attack and Its Modeling and Simulation with CPN
Liu Tao, He Yanxiang, and Xiong Qi
2011, 48(3):  432-439. 
Different from traditional DoS attacks, a low-rate DoS (LDoS) attack is stealthy, periodic, and low-rate in attack volume, and is very hard to detect and defend against in time. Addressing these features, we present a real-time mitigating mechanism against LDoS attacks based on Q-learning. Taking an adaptation control system as the target of protection, a Q-learning module implemented with a BP neural network, which takes characteristic parameters periodically extracted from the network as its input, is used to choose the best defense measure; the selected measure is then carried out by the victim system. Defense measures are designed around dynamic service-resource allocation: the mechanism adjusts the service capability of the system in real time according to its running state, so as to preserve the quality of service offered to normal requests. Finally, the attack scenario and the whole mitigation process are modeled and simulated with CPN and BP neural networks. Simulation results show that our mechanism can efficiently and sensitively relieve the effect of an LDoS attack on the victim system.
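The defense-selection loop can be illustrated with a tabular Q-learning update (the paper uses a BP neural network as the function approximator instead); the states, actions, and reward below are toy stand-ins.

```python
import random

# Tabular Q-learning sketch: states are coarse traffic levels, actions
# are defense measures (resource-allocation choices).  All names and the
# toy reward are illustrative, not the paper's.
STATES = ["calm", "burst"]
ACTIONS = ["keep", "add_capacity"]
ALPHA, GAMMA = 0.5, 0.9

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def update(s, a, reward, s_next):
    best_next = max(Q[(s_next, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])

random.seed(0)
for _ in range(200):
    s = random.choice(STATES)
    a = random.choice(ACTIONS)
    # Toy reward: adding capacity pays off only during an attack burst.
    r = 1.0 if (s, a) in {("burst", "add_capacity"), ("calm", "keep")} else -1.0
    update(s, a, r, random.choice(STATES))

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)
```

The learned greedy policy allocates extra resources only during attack bursts, which mirrors the mechanism's dynamic service-resource allocation.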
A Parallel Algorithm of Three-Dimensional Fast Fourier Transform
Fang Wei, Sun Guangzhong, Wu Chao, and Chen Guoliang
2011, 48(3):  440-446. 
Three-dimensional fast Fourier transform (3D-FFT) is widely used in physics. It is both computation- and communication-intensive, so in most applications that rely on it, the 3D-FFT dominates the running time. Traditional parallel 3D-FFT algorithms are not suitable for the sparse lattices often encountered in quantum computing, because the block partitioning they use can involve much redundant computation and communication, given the sparsity of non-zero elements in the FFT grid. In this paper we propose a novel parallel 3D-FFT algorithm. Unlike previous methods, it uses slice partitioning and reorders the computation so as to minimize calculation time and communication cost. Thanks to the slice partitioning, the new method is highly scalable and automatically satisfies load-balancing requirements. We compare it with traditional algorithms both in theory and in practice. Theoretical performance analysis shows that the new method can greatly reduce computation time and increase parallel speedup. Experiments were carried out on several high-performance machines, including KD-50, IBM JS22, and DAWNING. The results show that our algorithm performs much better than traditional algorithms on 3D-FFTs of sparse lattices.
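The correctness of slice (slab) partitioning rests on the separability of the 3D-FFT: 2-D FFTs within each slab of x-planes, one data exchange, then the remaining 1-D FFTs. A serial NumPy sketch (standing in for the MPI ranks; the sparse lattice is illustrative):

```python
import numpy as np

# Slab ("slice") decomposition sketch: each worker would own a block of
# x-planes, do the 2-D FFTs locally, then a single transpose/all-to-all
# gathers data for the remaining 1-D FFTs.  Serial NumPy stands in for
# the MPI ranks; the sparse lattice is illustrative.
rng = np.random.default_rng(7)
a = np.zeros((8, 8, 8), dtype=complex)
a[rng.integers(0, 8, 10), rng.integers(0, 8, 10),
  rng.integers(0, 8, 10)] = 1.0                        # sparse lattice

n_slabs = 4
slabs = np.split(a, n_slabs, axis=0)                   # distribute x-planes
stage1 = [np.fft.fftn(s, axes=(1, 2)) for s in slabs]  # local 2-D FFTs
gathered = np.concatenate(stage1, axis=0)              # the exchange step
result = np.fft.fft(gathered, axis=0)                  # remaining 1-D FFTs

print(np.allclose(result, np.fft.fftn(a)))  # → True
```

On a sparse lattice, a slab that contains few non-zero planes finishes its stage-1 work quickly, which is the property the paper's reordering exploits to skip redundant work.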
A Molecular Solution for the Ramsey Number on DNA-Based Supercomputing
Li Kenli, Guo Li, Tang Zhuo, Jiang Yong, and Li Renfa
2011, 48(3):  447-454. 
Ramseys theorem is a foundational result in combinatorics, which adopts many technologies in each embranchment of mathematics. Its conclusions are very important in set theory, logic, analysis, algebra and so on. But the Ramsey number problem is one of the most difficult problems in mathematics, and there are only 9 Ramsey numbers that have been solved. The objective of this paper is to solve the Ramsey number problem. We propose an improved DNA computing model based on the biological operations in the Adleman-Lipton model, the solution space of stickers in the sticker-based model, and the add-bit-sequence coding method proposed by Xu Jin, et al. According to the proposed algorithm, we can obtain lower bound and upper bound by means of specific mathematical formulas which already exist firstly. For each number from lower bound to upper bound, we build the initial answer space. Then, we remove the DNA chains which accord with special qualifications. Finally, we detect the final test tube to confirm whether the current number is a Ramsey number or not. The theoretic analysis and simulated experiments show that the solution to the difficult Ramsey number problem is possible in acceptable time, provided that the technology of DNA biology is mature enough in the future.
Total-Idle-Time Increment Based Hybrid GA for No-Wait Flowshops with Makespan Minimization
Zhu Xia, Li Xiaoping, and Wang Qian
2011, 48(3):  455-463. 
No-wait flowshop scheduling with makespan minimization has been proved to be an NP-hard combinatorial optimization problem. To solve it, the problem is equivalently transformed into total-idle-time minimization in this paper. Unlike traditional methods, in which the objective is recomputed from scratch for every newly generated schedule, total-idle-time increment methods are presented: whether a new schedule is better or worse than the original is judged solely by the total-idle-time increment, which reduces computation time considerably. Increment properties are analyzed for the fundamental operations of heuristics and evolutionary operators, and based on these properties a fundamental method for fast schedule evaluation is introduced. IHGA (increment-based hybrid genetic algorithm) is then proposed for the considered problem. In IHGA, different initialization methods and evolutionary operators are constructed; a dynamic strategy for updating operator probabilities, a population-convergence test, and a regeneration mechanism are designed to improve effectiveness. In addition, an iterative improvement procedure is integrated into IHGA as a local search to further refine the best solutions in the population. IHGA is compared with the best existing algorithms RAJ, GR, SA2, TSM, and FCH on 120 classical benchmark instances. Computational results show that IHGA outperforms the others in effectiveness; in efficiency it is better than SA2 and TSM and slightly worse than GR, RAJ, and FCH.
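Why incremental evaluation works: in a no-wait flowshop the minimal start offset between consecutive jobs depends only on that pair, so the makespan is a sum of pairwise offsets and a swap touches only a few terms. A minimal sketch with illustrative processing times (not the paper's exact increment formulas):

```python
# No-wait flowshop: the minimal start offset D(i, j) between consecutive
# jobs depends only on that pair, so makespan is a sum of pairwise
# offsets; swapping adjacent jobs changes only a few terms -- the
# increment idea.  Processing times are illustrative.
P = [[3, 2, 4], [1, 5, 2], [4, 1, 3]]   # P[job][machine]
M = len(P[0])

def prefix(job, m):                      # time on machines 1..m
    return sum(P[job][:m])

def offset(i, j):
    # Smallest start gap keeping job j no-wait after job i everywhere.
    return max(prefix(i, m + 1) - prefix(j, m) for m in range(M))

def makespan(seq):
    gaps = sum(offset(a, b) for a, b in zip(seq, seq[1:]))
    return gaps + prefix(seq[-1], M)

def swap_increment(seq, k):
    """Makespan change from swapping positions k and k+1, evaluating
    only the at-most-three affected pairwise offsets."""
    i, j = seq[k], seq[k + 1]
    old = new = 0
    if k > 0:
        old += offset(seq[k - 1], i); new += offset(seq[k - 1], j)
    old += offset(i, j);              new += offset(j, i)
    if k + 2 < len(seq):
        old += offset(j, seq[k + 2]); new += offset(i, seq[k + 2])
    else:
        old += prefix(j, M);          new += prefix(i, M)  # last job changes
    return new - old

seq = [0, 1, 2]
inc = swap_increment(seq, 1)
print(makespan(seq) + inc == makespan([0, 2, 1]))  # → True
```

For long sequences the increment touches O(1) offsets while a full re-evaluation is O(n), which is where the reported speedup comes from.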
Convergence of a General Formulation for Polynomial Smooth Support Vector Regressions
Xiong Jinzhi, Xu Jianmin, and Yuan Huaqiang
2011, 48(3):  464-470. 
Lee et al. proposed a smooth ε-support vector regression (ε-SSVR) in 2005, and Xiong et al. proposed a polynomial smooth ε-support vector regression (ε-PSSVR) in 2008, which improved the performance and efficiency of regression. However, finding a general formulation of polynomial smooth ε-support vector regressions and proving its convergence remained open problems. Therefore, using a class of polynomial functions as new smoothing functions, the polynomial smooth model ε-PSSVR is extended to the general case, and a dth-order polynomial smooth ε-support vector regression (ε-dPSSVR), a general formulation of polynomial smooth ε-support vector regressions, is proposed using the smoothing technique. The global convergence of ε-dPSSVR is proved by mathematical induction. The research concludes that: 1) there are infinitely many polynomial smooth models for support vector regression, and they can be described by one general formulation; 2) the general formulation is globally convergent, and its convergence upper bound is reduced by about half an order of magnitude relative to ε-SSVR. Solving the convergence problem of the general formulation supplies basic theoretical support for further research on polynomial smooth ε-support vector regressions.
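The flavor of polynomial smoothing can be shown on the plus function t₊ = max(t, 0), the building block of smooth SVR losses. Below is the degree-2 member of such a family (a generic illustration, not the paper's exact dth-order formula): polynomial on [-1/k, 1/k], exact outside, with peak gap 1/(4k) that shrinks as the smoothing parameter k grows.

```python
# Quadratic smoothing of the plus function t_+ = max(t, 0): polynomial
# on [-1/k, 1/k], exact outside.  This degree-2 case is one illustrative
# member of the family; higher degrees tighten the bound.
def plus(t):
    return max(t, 0.0)

def smooth_plus(t, k):
    if t <= -1.0 / k:
        return 0.0
    if t >= 1.0 / k:
        return t
    return (k / 4.0) * (t + 1.0 / k) ** 2   # C^1-matches both branches

k = 100
worst = max(abs(smooth_plus(t / 1000.0, k) - plus(t / 1000.0))
            for t in range(-100, 101))
print(round(worst, 6))  # → 0.0025, i.e. the 1/(4k) peak gap at t = 0
```

Because the approximation error is bounded by 1/(4k), the smoothed objective converges to the original one as k grows, which is the shape of the convergence statements proved for the general formulation.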
Voice Activity Detection Based on Sample Entropy in Car Environments
Zhao Huan, Wang Gangjin, Hu Lian, and Peng Xiujuan
2011, 48(3):  471-476. 
One of the key issues in practical speech processing is to precisely locate the endpoints of the input utterance so as to exclude non-speech regions. Although many studies have addressed this problem, existing voice activity detection (VAD) algorithms still operate far from ideally. This paper proposes a robust feature for VAD in car environments based on sample entropy (SampEn), an improved form of approximate entropy (ApEn). In addition, we adopt the fuzzy C-means clustering algorithm and the Bayesian information criterion to estimate the thresholds of the SampEn feature, and use a dual-threshold method for VAD. Experiments on the TIMIT continuous speech database show that, in car-noise environments, the detection accuracy of both SampEn and ApEn is much higher than that of spectral entropy (SE) and energy spectral entropy (ESE). The SampEn method has better detection performance than ApEn, especially at SNRs of 0 dB or below, where it outperforms ApEn by nearly 10%. The SampEn method therefore has good application prospects in the automotive field and can provide accurate VAD for car navigation.
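SampEn(m, r) is -ln(A/B), where B counts matching templates of length m and A those of length m + 1 (self-matches excluded, Chebyshev distance). A minimal sketch with illustrative signals; regular (speech-like periodic) frames score lower than noise frames, which is what makes the feature usable for VAD:

```python
import math
import random

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B): B counts template matches of length m,
    A those of length m+1; self-matches excluded, Chebyshev distance.
    Minimal sketch -- tolerance r is absolute here, not r * std."""
    def count(mm):
        templates = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        n = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b)
                       for a, b in zip(templates[i], templates[j])) <= r:
                    n += 1
        return n
    B, A = count(m), count(m + 1)
    return -math.log(A / B)

# A noisy frame should score higher (less regular) than a periodic one.
periodic = [math.sin(0.1 * i) for i in range(100)]
random.seed(1)
noisy = [random.uniform(-1, 1) for _ in range(100)]
print(sample_entropy(periodic) < sample_entropy(noisy))  # → True
```

The dual-threshold step in the paper then turns this per-frame score into speech/non-speech endpoint decisions.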
An Adaptive Particle Level Set Method
Zhang Guijuan, Zhu Dengming, Qiu Xianjie, and Wang Zhaoqi
2011, 48(3):  477-485. 
Fluid animation is one of the most desirable techniques in computer graphics and virtual reality. Because fluid phenomena contain highly complex behaviors and rich visual details, the complex motion of the water-air interface is difficult to handle, and capturing the interface accurately and efficiently is a key issue in fluid animation. To address the high numerical diffusion and low efficiency of traditional methods such as the level set and particle level set algorithms, an adaptive particle level set method is presented. Particle placement in our approach is modeled as a stochastic process, and the desired accuracy is achieved by heuristically allocating more computational resources to regions of high numerical dissipation during animation. To derive optimal rules for computing this stochastic process, we construct an importance-sampling model and evaluate the volume loss over the computational domain. The probability density function (PDF) of particle placement is obtained with a geometry-based sampling algorithm that uses a novel local-feature-size function on the narrow-band level set. The efficiency of computing the stochastic process is further improved using a definition of accumulated shape deformation. Experiments show that the proposed approach delivers high quality at low cost in both numerical tests and water-animation applications.
Importance-Sampling-Based Real-Time Algorithm for All-Frequency Shadows Rendering in Dynamic Scenes
Li Shuai, Ran Jiao, Hao Aimin, and Wang Zhen
2011, 48(3):  486-493. 
Accurately rendering soft shadows under environment lighting requires expensive computation to determine visibility from each pixel to hundreds of lights in all directions, which poses a great challenge for real-time rendering. Precomputation-based algorithms can provide accurate, real-time solutions but are hard to extend to dynamic scenes, and they have inherent drawbacks: they usually need long precomputation, consume much memory, and support only limited types of lights. This paper proposes an importance-sampling-based real-time algorithm for all-frequency shadow rendering in dynamic scenes that requires no precomputation. First, we sample from the environment map the important lights that determine the soft shadows; second, we simplify the scene by geometric sampling so that a shadow map can be created for each sampled light in a single rendering pass; third, we approximate the shadow test with an exponential function; and finally we accelerate the shading calculation with BRDF importance sampling. As a result, we achieve real-time rendering of all-frequency shadows and environment mapping in dynamic scenes. Experiments demonstrate that the algorithm supports dynamic environment lights, dynamic objects, and dynamic BRDFs, while achieving real-time results comparable to PRT algorithms.
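The first step, sampling important lights from the environment map, amounts to drawing directions with probability proportional to pixel luminance. A minimal CDF-inversion sketch with a tiny 1-D "map" standing in for the image (values illustrative):

```python
import random

# Importance sampling sketch: draw light directions with probability
# proportional to pixel luminance, so bright environment-map regions
# receive most of the shadow-test budget.  A tiny 1-D "map" stands in
# for the image; values are illustrative.
luminance = [0.05, 0.05, 8.0, 0.05, 1.8, 0.05]

def sample_lights(lum, n, rng):
    total = sum(lum)
    cdf, acc = [], 0.0
    for v in lum:
        acc += v
        cdf.append(acc / total)       # cumulative distribution
    picks = []
    for _ in range(n):
        u = rng.random()
        picks.append(next(i for i, c in enumerate(cdf) if u <= c))
    return picks

rng = random.Random(3)
picks = sample_lights(luminance, 1000, rng)
print(picks.count(2) > picks.count(0))  # the bright texel dominates → True
```

Concentrating samples on the bright texels is what keeps the per-frame shadow-map count small enough for real-time rates.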
An Optimized Motion Estimation Algorithm Based on Macroblock Priorities
Lu Jiyuan, Zhang Peizhao, Duan Xiaohua, and Chao Hongyang
2011, 48(3):  494-500. 
To deploy highly efficient video coding technologies on different hardware platforms, algorithms with controllable computational complexity are required; they should not only make complexity adjustable but also attain high coding performance under any complexity constraint. Since motion estimation (ME) is one of the most significant resource consumers, we propose a complexity-controllable ME algorithm. First, we give an optimized frame-level ME algorithm with adjustable complexity by constructing a model of distortion and computational complexity. The ME priority of each macroblock (MB) is the priority of its distortion-complexity slope (DC-slope), which can be predicted by the proposed model, and the complexity of the algorithm is adjusted by setting the number of MBs on which ME is performed. Second, an ordinary differential equation is used to describe the relation between the algorithm's parameter and its computational complexity, and we give an efficient technique for adjusting the parameter to meet the complexity constraints imposed by differing hardware platforms. Experimental results show that the proposed algorithm precisely controls the complexity of motion estimation and achieves better coding performance than other motion estimation algorithms.
Minimum Spanning Tree Fusing Multi-Feature Point Information for Medical Image Registration
Zhi Lijia, Zhang Shaomin, Zhao Dazhe, and Zhao Hong
2011, 48(3):  501-507. 
Medical image registration is a fundamental task in image processing, widely used for diagnosing disease, planning treatment, guiding surgery, and studying disease progression. To meet the requirements of high robustness, high accuracy, and speed, this paper proposes a minimum spanning tree (MST) registration algorithm that fuses multiple kinds of feature-point information. The algorithm extracts three kinds of feature points from the image: Harris-Laplace points, Laplacian-of-Gaussian points, and grid points. A genetic algorithm with an appropriately chosen cost function is then used for point selection, which greatly reduces redundancy. The selected points are used to build the vertex set of an undirected complete graph by a location-mapping method. Finally, the MST is constructed with a modified Kruskal algorithm, which estimates the Rényi entropy directly. The new algorithm overcomes the low robustness caused by unstable feature-point extraction and the speed bottleneck of using an MST to estimate Rényi entropy. Experimental results show that on images with noise, non-uniform intensity, and large initial misalignment, the proposed algorithm achieves better robustness and higher speed while maintaining good registration accuracy, compared with conventional area-based and feature-based registration methods.
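The connection between MST length and Rényi entropy is that, up to constants, the total MST edge length over feature samples grows with their dispersion, so a shorter MST means a lower entropy estimate and registration seeks the transform minimizing it. A minimal sketch using Prim's algorithm over illustrative 2-D points (the paper uses a modified Kruskal algorithm):

```python
# MST-based Rényi entropy sketch: up to constants, the total MST edge
# length over feature samples grows with their dispersion, so a shorter
# MST means lower estimated joint entropy (better alignment).  Prim's
# algorithm over 2-D points; the data are illustrative.
def mst_length(points):
    n = len(points)
    in_tree = [False] * n
    dist = [float("inf")] * n
    dist[0], total = 0.0, 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]),
                key=lambda i: dist[i])
        in_tree[u] = True
        total += dist[u]
        for v in range(n):
            if not in_tree[v]:
                d = ((points[u][0] - points[v][0]) ** 2 +
                     (points[u][1] - points[v][1]) ** 2) ** 0.5
                dist[v] = min(dist[v], d)
    return total

aligned = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1)]   # tight joint samples
misaligned = [(0, 0), (3, 0), (0, 3), (3, 3)]        # dispersed samples
print(mst_length(aligned) < mst_length(misaligned))  # → True
```

In registration, the points would be joint intensity or feature samples from the two images; as alignment improves, the joint samples cluster and the MST shortens.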
Affine Invariant Feature of Multi-Scale Maximally Stable Extremal Region
Luo Ronghua, Min Huaqing, and Chen Cong
2011, 48(3):  508-515. 
Affine invariant features based on local regions are widely used for object recognition, scene classification, and image retrieval in computer vision. Among the proposed affine invariant region features, the maximally stable extremal region (MSER) has proven attractive in several respects, but because it is extracted from a single-scale image, it becomes unstable when the image is blurred by a change of scale. To solve this problem, a novel affine invariant feature that is maximally stable in both image space and scale space is designed, together with a criterion for evaluating the stability of extremal regions in scale space and a fast algorithm for extracting extremal regions in scale space. In addition, exploiting the fact that extremal regions describe feature contours fairly well, a new feature descriptor is designed that combines local gray gradients with shape information. The proposed feature and its descriptor are called the multi-scale maximally stable extremal region (MMSER). Experimental results show that MMSER is much more stable and discriminative than MSER under various affine transformations, and that computing the designed descriptor takes about 45% of the time of the SIFT descriptor.
The Matching Share Allocation Strategy of Rename Registers for Redundant Multithreading Architectures
Yin Jie and Jiang Jianhui
2011, 48(3):  516-527. 
The simultaneous multithreading technique permits multiple instructions from different threads to issue simultaneously, which naturally supports fault tolerance through redundant thread execution. Redundant multithreading (RMT) duplicates each thread into two copies that execute independently, and the corresponding results of the two copies are compared to detect or tolerate faults. Most RMT structures adopt ICOUNT thread scheduling, which can suffer from long-latency instructions: such instructions may occupy critical resources (such as the rename register file and instruction queue) for a long time while contributing nothing to performance, starving the other threads of resources. Critical resources are thus used inefficiently, degrading both throughput and single-thread performance. In this paper, a novel rename-register allocation method called matching share (MS) is proposed. In an N-context processor, the rename register file is divided into N parts, each shared by a master thread and its corresponding slave thread. This avoids competition for rename registers among different threads, while the sharing between master and slave threads avoids a drop in resource utilization. Experimental results show that the proposed MS strategy enhances both throughput and single-thread performance.
Optimized Injecting Method of Software-Implemented Fault Injection Technique
Pan Qinghe and Hong Bingrong
2011, 48(3):  528-534. 
As an analysis method for software dependability, software-implemented fault injection (SWIFI) has been studied for a long time, and many injectors running on different platforms with different test goals have been designed. In this paper, a software-implemented fault injector is designed for the Windows NT platform, with the aim of testing and evaluating the reliability of software that will work in high-radiation environments, where single event upsets (SEUs) are a main cause of software failure. In our injector, an SEU is emulated by flipping one or more bits in memory or registers, with memory injection as the research focus. To refine the injection methodology and experimental technique, several related issues are studied. Based on the memory-space injection technique, two injection methods are designed and their time complexity is computed, and two kinds of injection experiments are implemented accordingly. Fault injection experiments are usually time-consuming and inefficient, so it is important to choose an injection plan that optimizes the tests. Different distributions of the location random variable X are used in the experiments, and comparative analysis of the results shows that there is always an injection plan that can significantly optimize the experiments.
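The SEU emulation itself reduces to flipping a chosen bit of the target memory image. A minimal sketch with a uniform location variable X (one illustrative choice; the paper compares several distributions):

```python
import random

# SEU emulation sketch: a single event upset is modelled by flipping one
# randomly chosen bit of the target memory image.  Uniform location
# variable X shown here as one illustrative plan.
def inject_seu(memory: bytearray, rng: random.Random) -> int:
    """Flip one uniformly chosen bit in place; return its bit address."""
    bit = rng.randrange(len(memory) * 8)
    memory[bit // 8] ^= 1 << (bit % 8)
    return bit

mem = bytearray(b"\x00" * 64)
rng = random.Random(42)
addr = inject_seu(mem, rng)
flipped = sum(bin(b).count("1") for b in mem)
print(flipped, mem[addr // 8] >> (addr % 8) & 1)  # → 1 1
```

Multi-bit upsets can be emulated by repeated calls; changing the distribution of the bit address is how different injection plans are realized.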
Soft Error Rate Analysis for Combinational Logic Using Frequency Method
Lei Shaohua, Han Yinhe, and Li Xiaowei
2011, 48(3):  535-544. 
This paper introduces a soft error rate (SER) analysis algorithm using a frequency method, which provides a fast and accurate technique for calculating the effects of electrical masking and latching-window masking. The frequency method analyzes the propagation of a transient fault using the frequency features of each logic gate and signal, and is divided into two procedures: a linear-system procedure and a nonlinear-system procedure. In the linear-system procedure, the output signal is calculated from the frequency response of the electrical system. In the nonlinear case, the transient fault's magnitude is high enough that the transistor passes through different linear regions, flattening the top of the signal; here the output signal is calculated from the input-output characteristic of the electrical system. The functional relationship between the pulse width of the output signal and the pulse width of the particle-strike-induced glitch is obtained by a data-fitting method. In this paper, the frequency method and the data-fitting method are applied to model the probability of errors caused by a particle strike. Experimental results on inverter chains show that the proposed technique achieves 96.96% accuracy and is time-efficient compared with HSPICE simulation; experiments on ISCAS85 benchmark circuits and a multiplier array also achieve 95.82% accuracy.