ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 April 2012, Volume 49 Issue 4
Paper
Measuring the Identifier Repetition and Aliasing Phenomena in K Networks
Liu Xiangtao, Li Wenfa, Duan Miyi, Zhao Xinyu, and Ji Tongkai
2012, 49(4):  679-690. 
Kademlia, as a robust distributed hash table (DHT) protocol, has been deployed by peer-to-peer (P2P) file sharing applications (e.g. BitTorrent and eMule) to facilitate the delivery of content. In this paper, these Kademlia-based networks deployed by BitTorrent and eMule are called K networks. It is essential for each peer in K networks to possess a unique IP address (or ID), on which both "peer lookup" and "resource searching" rely. However, our analysis shows that a significant portion of peers exhibit IP repetition and ID aliasing. In this paper, we propose a couple of metrics to deeply understand the distribution characteristics of IP repetition and ID aliasing. Based on these metrics, we carry out a series of measurements using the P2P crawler named Rainbow. We identify many interesting characteristics of IP repetition and ID aliasing in K networks, which could help promote P2P-network mining.
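
As a hedged illustration (not the Rainbow crawler or the paper's exact metrics), the two phenomena can be quantified from crawl records of (ID, IP) observations roughly as follows; the record format and ratio definitions below are assumptions:

```python
from collections import defaultdict

def repetition_and_aliasing(records):
    """records: iterable of (peer_id, ip) pairs observed during a crawl.
    IP repetition: one IP observed with several distinct IDs.
    ID aliasing:   one ID observed with several distinct IPs."""
    ids_per_ip, ips_per_id = defaultdict(set), defaultdict(set)
    for peer_id, ip in records:
        ids_per_ip[ip].add(peer_id)
        ips_per_id[peer_id].add(ip)
    ip_repetition = sum(len(s) > 1 for s in ids_per_ip.values()) / len(ids_per_ip)
    id_aliasing = sum(len(s) > 1 for s in ips_per_id.values()) / len(ips_per_id)
    return ip_repetition, id_aliasing

# Two IDs behind one IP (repetition) and one ID seen from two IPs (aliasing).
print(repetition_and_aliasing([("a", "1.2.3.4"), ("b", "1.2.3.4"), ("a", "5.6.7.8")]))
```
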
Measurement of Microblogging Network
Fan Pengyi, Wang Hui, Jiang Zhihong, and Li Pei
2012, 49(4):  691-699. 
With the breakthroughs in mobile communications and Web technology, online social networks led by microblogs have developed rapidly in China, and more and more people share information and opinions through microblogging systems. In order to gain insight into the topological characteristics of the microblogging network and online user behavior, we launch an active measurement of Sina microblog, the biggest microblogging system in China. This paper analyzes the measurement results and investigates the topological characteristics and user behavior patterns of Sina microblog, comparing them with existing results from measurements of other online social networks. Our major findings include: 1) Sina microblog has an apparent small-world effect; 2) while the indegree of Sina microblog follows a power-law distribution, the outdegree distribution appears to have multiple separate power-law regimes with different exponents; 3) unlike other online social networks, Sina microblog shows a weak correlation between indegree and outdegree; 4) the overlay graph of Sina microblog exhibits assortative mixing; 5) users' tweeting times exhibit daily and weekly patterns; 6) the number of tweets in Sina microblog approximately follows a Weibull distribution; 7) retweeting and replying actions are strongly correlated in Sina microblog, and the probability of retweeting is higher than that of replying. These findings are helpful not only for designing mathematical or computational models that match actual microblogging characteristics in China, but also for monitoring, directing and controlling microblogging systems.
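
As a small, generic illustration of how one of the distributional findings can be checked (this is a standard estimator, not the paper's measurement pipeline, and the toy data are made up), the power-law exponent of an in-degree sample can be estimated by maximum likelihood:

```python
import numpy as np

def powerlaw_alpha(degrees, xmin=1):
    """Continuous maximum-likelihood estimate of the power-law exponent
    alpha = 1 + n / sum(ln(x_i / xmin)) over all samples x_i >= xmin."""
    x = np.asarray([d for d in degrees if d >= xmin], dtype=float)
    return 1.0 + x.size / np.log(x / xmin).sum()

# Toy in-degree sample; a real study would use the crawled follower counts.
print(powerlaw_alpha([1, 2, 2, 3, 5, 8, 13, 40, 120], xmin=1))
```
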
Cross-Regional Ferry Routing Design for Multiple Messengers in Opportunistic Networks
Jiang Haitao, Li Qianmu, Xu Jian, and Zhang Hong
2012, 49(4):  700-709. 
Opportunistic networks are wireless networks where, most of the time, there is no complete path between source and destination due to factors such as node mobility, wide deployment areas and limited radio power. Messages are forwarded asynchronously, relying on contacts between nodes and infrastructure. Ferry routing is an approach that utilizes a set of special mobile infrastructure nodes, called messengers, to provide communication services for the nodes in opportunistic networks. To address the shortcomings of messenger scheduling and collaboration in traditional ferry routing, a cross-regional multiple-messenger scheduling method is proposed in this paper. The network is divided into several horizontal and vertical regions, and each region contains a messenger that provides service for its nodes. Messages are forwarded through a single messenger or through the collaboration of a horizontal and a vertical messenger. A theoretical analysis of the expected delay of the proposed method is given, along with the improvements in delay and fault tolerance. Simulation results show that cross-regional ferry routing not only balances network load and delay, but also tolerates the failure of a single messenger, making it a reasonable and efficient multiple-messenger scheduling method.
Research on IP Routing Cache Technologies
Zhu Guosheng, Yu Shaohua, and Xu Ning
2012, 49(4):  710-716. 
Current IP address caching and IP prefix caching technologies both have limitations. We analyse the overlapping relationships among prefixes in global routing tables and propose a threshold-based routing cache method that combines IP address caching and prefix caching without the need for prefix expansion. The threshold value K is selected according to the prefix overlapping features of global routing tables, which overcomes the shortcomings of pure address caching and pure prefix caching. Comparison and simulation show that our scheme outperforms other schemes in cache size, cache hit ratio, fairness among prefixes and incremental prefix updates. For a global routing table with more than 260 000 entries and a cache size of 30 000, more than 97% of prefix nodes can be cached 1:1 by the prefix cache, and the remaining prefix nodes can be cached by the address cache with threshold K=4. Computation results show that high-speed forwarding can be achieved with a small cache.
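
A rough sketch of the idea (the overlap criterion and data layout below are assumptions, and the paper's exact rule for deciding which prefixes fall back to address caching may differ): entries covered by at most K more-specific prefixes are cached as prefixes, while the rest trigger per-address caching.

```python
def cache_policy(prefixes, K=4):
    """prefixes: list of (network_int, prefix_len) IPv4 routing-table entries.
    An entry covered by at most K more-specific prefixes is cached 1:1 as a
    prefix; deeper covering entries fall back to per-address caching.  This is
    an illustrative criterion, not necessarily the paper's exact rule."""
    def covers(p, q):                      # does prefix p cover the longer prefix q?
        (pn, pl), (qn, ql) = p, q
        return ql > pl and (qn >> (32 - pl)) == (pn >> (32 - pl))

    policy = {}
    for p in prefixes:
        more_specifics = sum(covers(p, q) for q in prefixes)
        policy[p] = "prefix-cache" if more_specifics <= K else "address-cache"
    return policy

# 10.0.0.0/8 covers 10.1.0.0/16, so both remain prefix-cacheable with K=4.
table = [(10 << 24, 8), ((10 << 24) | (1 << 16), 16)]
print(cache_policy(table, K=4))
```
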
A Network Immunization Strategy Based on Dynamic Preference Scan
Guo Chi, Wang Lina, Guan Yiping, and Zhang Xiaoying
2012, 49(4):  717-724. 
There is a variety of self-organization phenomena emerging in complex networks. These phenomena shed light on methods of network vulnerability mining and technologies of self-propelled network immunization. A complete process of immunity resource deployment can be divided into four stages: information gathering, scanning, bug fixing and self-propulsion. Empirical analysis of the vulnerability distribution demonstrates that the distribution of vulnerable hosts obeys a power law. This implies that blind scanning wastes many resources on invulnerable or nonexistent hosts, and that a more effective immunization strategy should take advantage of this high non-uniformity of the network vulnerability distribution. Good results can be achieved by static preference scanning at the beginning of immunity resource spread, but the effectiveness cannot persist throughout the entire immunization process. On this basis, a novel self-propelled network immunization strategy based on dynamic preference scanning is proposed. This strategy identifies vulnerable hosts efficiently by a dynamic and adaptive preference scanning method, and then fixes and immunizes these vulnerable hosts. This paper focuses on how to control this dynamic preference scanning process. Modeling analysis and computer simulation show that our strategy can restrain hazard spread efficiently and improve network security.
Modularity Modeling and Evaluation in Community Detecting of Complex Network Based on Information Entropy
Deng Xiaolong, Wang Bai, Wu Bin, and Yang Shengqi
2012, 49(4):  725-734. 
Community structure is the most basic and important topological characteristic of complex networks, and community detection methods are therefore significant for their statistical characterization. A new theoretical model of modularity Q based on information entropy (IE), with low complexity and better accuracy, is proposed to improve clustering accuracy. The IE algorithm achieves better community detection results than the GN and FastGN algorithms on networks with both well-defined and ambiguous community structure, while its computational complexity is lower. The simulation results show that IE finds more accurate community structure than traditional edge-ratio methods on most classic complex network datasets. In simulations comparing against seven main classic community detection methods on synthetic random networks and real networks, the information-entropy-based modularity is more accurate than the traditional edge-ratio modularity.
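
For reference, the "edge-ratio" modularity that the information-entropy measure is compared against is, in my reading, Newman's standard modularity; this formula is background, not the paper's new IE-based definition:

```latex
Q = \sum_{i=1}^{c}\bigl(e_{ii} - a_i^{2}\bigr), \qquad a_i = \sum_{j=1}^{c} e_{ij},
```

where e_{ij} is the fraction of edges joining community i to community j, so e_{ii} is the fraction of intra-community edges and a_i the fraction expected under random attachment.
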
Novel and Efficient Method on Feature Selection and Data Classification
Chen Tieming, Ma Jixia, Samuel H. Huang, and Cai Jiamei
2012, 49(4):  735-745. 
A novel feature selection method for data classification problems and a quick rule extraction scheme are proposed in this paper. First, the Chi-Merge discretization method is improved by reducing the number of initial intervals, so that continuous attributes can be discretized effectively. After attribute discretization, the contingency tables of the various feature patterns can be computed quickly, and an inconsistency rate can be derived for each contingency table. A key feature sequence can be identified by selecting the minimum inconsistency rate, and an optimized feature subset can be obtained efficiently with a sequential forward search strategy. Finally, based on the data contingency table under the selected feature subset, classification rules can be extracted in one pass. Experiments show that the proposed data classification scheme achieves good performance. Furthermore, the proposed feature selection and rule extraction method can be extended to classification applications on distributed isomorphic datasets. The proposed distributed classification method is simple and efficient, offers high performance, and preserves the privacy of the sample data contents.
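
A minimal sketch of the inconsistency-rate criterion used to rank candidate feature subsets (the data layout is an assumption; the paper's contingency-table bookkeeping and the sequential forward search around it are omitted):

```python
from collections import Counter

def inconsistency_rate(samples, feature_idx):
    """samples: list of (feature_vector, class_label) with discretized features.
    For each distinct pattern restricted to feature_idx, every sample outside
    the pattern's majority class counts as inconsistent."""
    by_pattern = {}
    for x, y in samples:
        key = tuple(x[i] for i in feature_idx)
        by_pattern.setdefault(key, Counter())[y] += 1
    inconsistent = sum(sum(c.values()) - max(c.values()) for c in by_pattern.values())
    return inconsistent / len(samples)

data = [((0, 1), "A"), ((0, 1), "A"), ((0, 1), "B"), ((1, 0), "B")]
print(inconsistency_rate(data, feature_idx=[0]))   # feature 0 alone leaves 1/4 inconsistent
```
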
KMA-α: A Kernel Matrix Approximation Algorithm for Support Vector Machines
Ding Lizhong and Liao Shizhong
2012, 49(4):  746-753. 
The computation of kernel matrices is essential for solving support vector machines (SVM). Since exact approaches are hard to apply to large-scale problems, there has been a great deal of recent interest in approximate approaches, and a new approximation algorithm for the computation of kernel matrices is proposed in this paper. Firstly, we reformulate the quadratic optimization for SVM and multiple-kernel SVM as a second-order cone programming (SOCP) problem through convex quadratically constrained linear programming (QCLP). Then, we combine Monte Carlo approximation with incomplete Cholesky factorization and present a new kernel matrix approximation algorithm, KMA-α. KMA-α uses a Monte Carlo algorithm to randomly sample the kernel matrix. Rather than directly calculating the singular value decomposition of the sample matrix, KMA-α applies incomplete Cholesky factorization with symmetric permutation to obtain a near-optimal low-rank approximation of the sample matrix. The approximate matrix produced by KMA-α can be used in the SOCP to improve the efficiency of SVM. Further, we analyze the computational complexity of the KMA-α algorithm and prove its error bound theorem. Finally, comparative experiments on benchmark datasets verify the validity and efficiency of KMA-α. Theoretical and experimental results show that KMA-α is a valid and efficient kernel matrix approximation algorithm.
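
The sketch below is in the spirit of the description above, combining uniform Monte Carlo column sampling with a pivoted incomplete Cholesky factorization; it is a hedged illustration, not the published KMA-α, and the SOCP reformulation that consumes the approximation is omitted.

```python
import numpy as np

def kma_like_approximation(K, c, tol=1e-8, rng=None):
    """Sample c columns of the kernel matrix K uniformly at random, then run a
    pivoted incomplete Cholesky on the sampled c x c block to get a low-rank
    factor L with K[idx][:, idx] ~= L @ L.T."""
    rng = np.random.default_rng(rng)
    n = K.shape[0]
    idx = rng.choice(n, size=c, replace=False)
    W = K[np.ix_(idx, idx)].copy()           # sampled c x c submatrix

    L = np.zeros((c, c))
    perm = np.arange(c)
    d = np.diag(W).copy()                     # residual diagonal
    for k in range(c):
        p = k + np.argmax(d[perm[k:]])        # symmetric pivoting: largest residual
        perm[[k, p]] = perm[[p, k]]
        piv = d[perm[k]]
        if piv < tol:                         # remaining pivots negligible: stop early
            L = L[:, :k]
            break
        L[perm[k], k] = np.sqrt(piv)
        rest = perm[k + 1:]
        L[rest, k] = (W[rest, perm[k]] - L[rest, :k] @ L[perm[k], :k]) / L[perm[k], k]
        d[rest] -= L[rest, k] ** 2
    return idx, L
```
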
On the Satisfiability and Consistency for CP-nets
Sun Xuejiao and Liu Jinglei
2012, 49(4):  754-762. 
Conditional preference networks (CP-nets) are a simple and intuitive graphical tool for representing conditional ceteris paribus (all else being equal) preference statements over the values of a set of variables, and they have recently become a research hotspot in artificial intelligence. However, few works so far address their satisfiability and consistency: there is neither a strict definition of these notions, nor a further discussion of their relationship, nor a general algorithm to generate a satisfactory ranking. By analyzing the relationship between satisfiability and consistency, two algorithms are proposed to judge satisfiability and to generate a satisfactory ranking for any structured binary-valued CP-net. Firstly, by constructing the induced graph of a CP-net and studying its properties, theorems on satisfiability and consistency are obtained. Secondly, by analyzing the relations between them, it is concluded that the satisfiability of a CP-net is equivalent to its consistency. Therefore, the satisfactory ranking for any structured binary-valued CP-net can be generated by topological sorting. Moreover, the syntax, semantics and some definitions of CP-nets are collated and introduced. These studies can be seen as an improvement and refinement of Boutilier's related works, and they deepen the basic theory of CP-nets.
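
Since the abstract reduces ranking generation to a topological sort of the induced graph, here is a minimal sketch of that final step only (constructing the induced graph from the CP-net is assumed to happen elsewhere, and the outcome names are invented):

```python
from graphlib import TopologicalSorter, CycleError

def satisfactory_ranking(preferred_over):
    """preferred_over: dict {outcome: set of outcomes it is directly preferred to}
    taken from the induced graph.  Returns a ranking from most to least preferred,
    or None when the induced graph is cyclic (i.e. the CP-net is inconsistent)."""
    predecessors = {o: set() for o in preferred_over}
    for better, worse_set in preferred_over.items():
        for worse in worse_set:
            predecessors.setdefault(worse, set()).add(better)
    try:
        return list(TopologicalSorter(predecessors).static_order())
    except CycleError:
        return None

# Toy induced graph over four outcomes: ab > ab', ab > a'b, ab' > a'b', a'b > a'b'.
print(satisfactory_ranking({"ab": {"ab'", "a'b"}, "ab'": {"a'b'"}, "a'b": {"a'b'"}}))
```
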
An Improved Algorithm of Multi-Robots Cooperative Online FastSLAM
Cai Yunfei, Tang Zhenmin, and Zhao Chunxia
2012, 49(4):  763-769. 
An improved multi-robot cooperative online FastSLAM algorithm based on auxiliary measurements is proposed to address the key problems of FastSLAM 1.0 and 2.0. In FastSLAM 1.0, errors accumulate because the robot's own localization is uncertain when no measurements are used; FastSLAM 2.0 takes measurements into account, but introduces additional computational complexity. The proposed algorithm requires two robots: the leader robot is responsible for accomplishing the SLAM task, and the auxiliary robot is responsible for providing the leader robot with static relative measurements. This cooperative auxiliary measurement method gives the SLAM executor more accurate localization information than FastSLAM 1.0 while avoiding the additional computational complexity of FastSLAM 2.0. Experimental results show that the algorithm achieves higher accuracy and is feasible; moreover, the method is of high practical value.
Maximum Vector-Angular Margin Kernel Classification
Hu Wenjun, Wang Shitong, and Tao Jianwen
2012, 49(4):  770-776. 
A maximum vector-angular margin classification mechanism, called MAMC, is proposed in this paper. Its core idea is to find a vector c in the pattern feature space that is as close as possible to the center of the training samples, yielding a smaller VC dimension, such that all the data points can be classified in terms of the maximum vector-angular margin ρ between the vector c and all the training points. The proposed approach MAMC can not only be kernelized to enhance its flexibility, but can also be realized simply by solving a corresponding convex optimization problem. Furthermore, its parameter ν is, respectively, a lower bound on the fraction of support vectors and an upper bound on the fraction of misclassified patterns, which is similar to the ν-SVC algorithm. Meanwhile, the corresponding hard-margin version can be equivalently formulated as a special kernelized minimum enclosing ball (MEB), called the center-constrained MEB (CC-MEB); thus MAMC can be extended to a version for training on large datasets using the generalized core vector machine (CVM). Experimental results on artificial and real datasets show that the proposed method is competitively effective.
Local Statistical Analysis of Gabor Coefficients and Adaptive Feature Extraction for Face Description and Recognition
Li Kuan, Yin Jianping, Li Yong, and Liu Fayao
2012, 49(4):  777-784. 
Much research has shown that Gabor features are robust to illumination and expression variations and have been successfully used in face recognition. However, Gabor-based methods suffer from the high-dimensionality problem, also known as the curse of dimensionality. A novel method for face description and recognition based on Gabor filters is proposed in this paper. The normalized face image is first decomposed by convolving it with multi-scale and multi-orientation Gabor filters, and then separated into several blocks. Using statistical measures, namely the mean and variance of the Gabor coefficients inside each block, a block feature vector (BFV) is obtained. The face feature vector (FFV) of the whole image is then constructed by concatenating the BFVs in row-column order. Thus the FFV can be used to describe the face, and the high-dimensionality problem is effectively solved. In the classification stage, a robust face recognition system performing multi-class classification is built from several two-class classifiers using a voting mechanism. For each two-class classifier, a feature extraction module adaptively selects the most important features, so only the most distinguishable features in the FFV are picked out, called the best subset feature vector (BSFV). Comparison with published results on the Yale face database verifies the validity of the proposed method.
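
A minimal sketch of the BFV/FFV construction described above (the Gabor filtering itself, the grid size, and the array layout are assumptions; adaptive feature selection is omitted):

```python
import numpy as np

def face_feature_vector(responses, grid=(4, 4)):
    """responses: array of shape (n_filters, H, W) holding the magnitudes of the
    multi-scale, multi-orientation Gabor responses of a normalized face image.
    Each response map is cut into grid blocks; the per-block mean and variance
    form the BFV, and the BFVs concatenated in row-column order form the FFV."""
    n_filters, H, W = responses.shape
    rows, cols = grid
    bh, bw = H // rows, W // cols
    ffv = []
    for r in range(rows):
        for c in range(cols):
            block = responses[:, r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            per_filter = block.reshape(n_filters, -1)
            ffv.extend(per_filter.mean(axis=1))   # block means, one per filter
            ffv.extend(per_filter.var(axis=1))    # block variances, one per filter
    return np.asarray(ffv)                        # length = 2 * n_filters * rows * cols
```
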
Node Probability Query Algorithm in Probabilistic XML Document Tree
Wang Jianwei and Hao Zhongxiao
2012, 49(4):  785-794. 
Because probabilistic XML documents are the standard for representing and exchanging probabilistic data over networks, querying elements and computing their probabilities is a central research topic. The probabilistic XML document tree is an effective data model for probabilistic XML documents, so this paper presents two query algorithms based on the definitions of basic and extended path expressions. In the first, the probabilistic XML tree is decomposed into a set of ordinary XML subtrees based on the possible-worlds principle, and a probability query algorithm for a single node or several nodes is then designed. In the second, the probabilistic XML tree is decomposed into a set of probabilistic XML subtrees based on the path-analysis principle, and the corresponding probability query algorithm is designed. The two algorithms are compared and analyzed through experiments, and the results show that both are effective. Compared with the first method, the second fits larger probabilistic XML document trees, and the querying and computation of node probabilities are easier.
A Weighted-Equal-Area Parallel Data Partitioning Algorithm for Global Numerical Weather Prediction Models
Lu Fengshun, Song Junqiang, Zhang Lilun, Zhang Weimin, Ren Kaijun, and Zhu Xiaoqian
2012, 49(4):  795-803. 
The computation over the polar regions plays a crucial role in the design of global numerical weather prediction (NWP) models, and manifests itself in two aspects: the particular treatment of the polar regions in the model's dynamical framework, and the load-balancing problem caused by parallel data partitioning strategies. The latter has become a bottleneck for the massive parallelization of NWP models. To address this problem, a novel spherical data partitioning algorithm based on a weighted-equal-area approach is proposed. The weight describes the distribution of computation across the sphere. The new algorithm takes the collar count and the weight function as its parameters and performs the spherical partitioning as follows: the north and south polar regions each form a single subdomain; the remaining sphere surface is partitioned into collars along the latitude; and each collar is then partitioned into subdomains along the longitude. This method results in two polar caps plus a number of collars whose partition counts increase toward the equator. After a theoretical analysis of the quality of the partitions produced by the algorithm, we take PSTSWM, a spectral shallow-water model based on the spherical harmonic transform, as our test-bed to validate the method. The preliminary results indicate that the algorithm yields good parallel load balance, and a promising prospect can be expected for its application within the global atmospheric model of GRAPES.
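
A hedged sketch of the weighted-equal-area idea (the parameterization, cutoff latitude and rounding are assumptions; the paper's exact construction differs in detail): polar caps above a cutoff latitude form one subdomain each, latitude collars are cut between them, and each collar receives a number of longitude subdomains proportional to its integrated weight.

```python
import numpy as np

def weighted_collar_partition(n_collars, n_subdomains, weight, cap_lat=80.0):
    """weight(lat_deg) gives the relative computational cost per unit area at a
    latitude.  Returns (lat_south, lat_north, subdomain_count) for each collar;
    the two polar caps (|lat| > cap_lat) are kept as single subdomains."""
    edges = np.linspace(-cap_lat, cap_lat, n_collars + 1)
    lats = np.linspace(-cap_lat, cap_lat, 721)
    dlat = lats[1] - lats[0]
    density = weight(lats) * np.cos(np.radians(lats))      # area factor on the sphere
    collar_w = np.array([density[(lats >= lo) & (lats < hi)].sum() * dlat
                         for lo, hi in zip(edges[:-1], edges[1:])])
    counts = np.maximum(1, np.round(n_subdomains * collar_w / collar_w.sum()))
    # counts may need a final +/-1 adjustment so they sum exactly to n_subdomains
    return [(float(lo), float(hi), int(c)) for lo, hi, c in zip(edges[:-1], edges[1:], counts)]

# With a uniform weight, subdomain counts grow toward the equator (cosine area factor).
print(weighted_collar_partition(6, 48, weight=lambda lat: np.ones_like(lat)))
```
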
An Improved Parameterized Algorithm for Hyperplane-Cover Problem
Li Wenjun, Wang Jianxin, and Chen Jianer
2012, 49(4):  804-811. 
The hyperplane-cover problem is a fundamental NP-hard problem in computational geometry with many applications in practice. Because of the computational hardness of NP-hard problems, several traditional approaches have been proposed for solving them, but each has its own limitations and none satisfies all application requirements in practice. Recently, a new approach to dealing with NP-hard problems, called parameterized computation, has been developed and has been used effectively to solve many hard problems. In this paper, based on a further structural analysis of the line-cover problem (a special case of the hyperplane-cover problem), a deterministic parameterized algorithm with running time O(k^3(0.736k)^k + n log k) is proposed using the depth-bounded search tree method, which significantly improves the previous best result O((k/2.2)^{2k} + n log k). The improvement comes from exploiting the relationship between points and lines and from a more precise running-time analysis of the algorithm. Moreover, by generalizing the line-cover algorithm to higher-dimensional space, a deterministic parameterized algorithm for the hyperplane-cover problem with running time O(dk^{d+1}(dk)!/((d!)^k k!) + n^{d+1}) is given, which greatly improves the previous best result O(k^{d(k+1)} + n^{d+1}). In particular, the proposed algorithms can be used to solve many other covering problems, such as covering points with spheres, covering points with polynomials, and covering by sets with intersection at most d.
A Method of RNA Secondary Structure Prediction Based on Hidden Markov Model
Dong Hao, Liu Yuanning, Zhang Hao, and Wang Gang
2012, 49(4):  812-817. 
The effective prediction of RNA secondary structure is an important research field in bioinformatics. We propose a new method based on a hidden Markov model (HMM) to predict RNA secondary structure. We apply a prefix-and-suffix matching algorithm to quickly find all possible stem zones (including pseudoknot stems), build the RNA HMM, find the optimal combination of stem zones, and obtain the RNA secondary structure including pseudoknots. Experimental results show that this method reduces computational complexity, improves the specificity and sensitivity of prediction with high accuracy, and can also predict pseudoknot structures.
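
A generic sketch of the stem-zone enumeration step only (this is a plain complementary-pair scan, not the paper's prefix-and-suffix matching algorithm, and it ignores the HMM scoring that follows):

```python
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def candidate_stems(seq, min_len=3, min_loop=3):
    """Enumerate candidate stem zones as runs of complementary base pairs
    (i, j), (i+1, j-1), ... extended as far as the loop constraint allows."""
    n, stems = len(seq), []
    for i in range(n):
        for j in range(n - 1, i + min_loop, -1):
            k = 0
            while (i + k < j - k - min_loop and
                   (seq[i + k], seq[j - k]) in PAIRS):
                k += 1
            if k >= min_len:
                stems.append((i, j, k))   # pairs (i, j), (i+1, j-1), ... of length k
    return stems

print(candidate_stems("GGGAAAUCCC"))      # helices such as the GGG...CCC stem are reported
```
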
Parallel Task Building in Multicore System
Wang Bo, Shang Shifeng, Wu Yongwei, and Zheng Weimin
2012, 49(4):  818-825. 
In multicore systems, system execution efficiency has received increasing attention. A whole system generally includes several modules, and optimization work has been done on each of them. In an integrated system consisting of Tomcat, Httpd and Lucene, each component has been individually optimized for favorable performance; however, when they are combined into an integrated system, the system as a whole may still not perform well. Based on an in-depth study of the characteristics of each subtask in the system, several parallelization approaches are presented to improve overall execution efficiency: 1) removing locks on shared objects or files; 2) rearranging subtasks; 3) moving system calls out of multi-threaded operations. Experimental results show that overall performance improves and the function each subtask focuses on becomes more distinct.
Generic GUI Generator Based on XML and XSD
Jiang Jinsong, Yan Kun, Ni Guiqiang, He Ming, and Yang Bo
2012, 49(4):  826-832. 
In order to satisfy application software's demand for rapidly changing GUIs (graphical user interfaces), a GUI generator is designed based on XML and XSD (XML Schema definition language). It is supported by parsing and persistence algorithms based on depth-first and breadth-first recursion. The generator is composed of a GUI designer and a GUI parser. Semantics of both layout and data models are supported, including hierarchical layout and the group, union and enumeration data models. Finally, an example from the router of a network intrusion detection system is shown, in which the GUI can be generated in the Java and C# languages and the semantic validity of input data can also be checked.
Study on R-Calculus
Wu Jiasen and Song Fangmin
2012, 49(4):  833-838. 
The R-calculus, proposed by Professor Li Wei, is a revision calculus for formal theories. It plays a key role in the OPEN and GUINA logics. It can be considered a tool for cutting out the prerequisites of a contradiction when a theory conflicts with some facts, resulting in a consistent sub-theory. In order to give a purely syntactic calculus, we explore the concept of "necessary prerequisite" in depth and characterize it in three different ways: the first is based on logical consequence in the original R-calculus; the second is based on maximally consistent subsets; and the third is an inductive formalization of R-prerequisites. R-terminated sets and maximally consistent subsets are also studied. By comparing these characterizations, we show that each has its own advantages and disadvantages. We choose the last one as our approach and derive a sound and relatively complete calculus, namely R′, which better serves the original purpose. The lower and upper bounds of R-terminated sets are proposed. We also prove that deriving the maximally consistent subsets of a contradiction is not only non-recursive but not even recursively enumerable, which indicates that no purely syntactic system can be both sound and complete.
A Method of Buffer Overflow Detection Based on Static Code Analysis
Wang Yawen, Yao Xinhong, Gong Yunzhan, and Yang Zhaohong
2012, 49(4):  839-845. 
As the Internet advances, people pay more and more attention to information security, and buffer overflow has become one of the best-known software security vulnerabilities. At the source-code level, such vulnerabilities are mainly caused in two ways: data-copy-related and format-control-string-related function calls. This paper summarizes the common functions that are prone to buffer overflows, and introduces an algorithm for computing the length of formatted string variables when formatted input/output functions are called. It also proposes a buffer overflow detection method based on static code analysis. The detection method first models the source code by creating its abstract syntax tree, symbol table, control flow graph and function call graph. Based on these models, the value range of variables and expressions at each program point is computed by interval calculation, and when a function call is encountered, the function's summary is applied as a stand-in for the function. The method is scalable in that users can add functions under test in configuration files. Experiments on open-source projects show that it detects buffer overflows efficiently, and that its output has both a lower false-positive rate and a lower false-negative rate than another testing tool, Klocwork K8.
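
A minimal sketch of the interval-calculation idea behind value-range analysis (generic textbook interval arithmetic, not the tool's implementation; the function names are invented for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """Value range [lo, hi] of an integer expression at a program point."""
    lo: int
    hi: int

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def join(self, other):                       # merge ranges at a control-flow join
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

def check_copy(dest_size, length: Interval):
    """Flag a potential overflow when the copied length may reach dest_size."""
    return "overflow possible" if length.hi >= dest_size else "safe"

# strcpy-like call where the source length is known to lie in [0, 80]:
print(check_copy(dest_size=64, length=Interval(0, 80)))   # -> overflow possible
```
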
Automatic Image Annotation Based on Relevant Visual Keywords
Ke Xiao, Li Shaozi, and Cao Donglin
2012, 49(4):  846-855. 
Automatic image annotation is a significant and challenging problem in pattern recognition and computer vision. Existing models cannot describe the visual representations of the corresponding keywords, which leads to a great number of irrelevant annotations in the final results: annotation words that are not visually related to any part of the image. A new automatic image annotation model (VKRAM) based on relevant visual keywords is proposed to overcome these problems. Our model divides keywords into two categories: abstract words and non-abstract words. Firstly, we establish visual keyword seeds for each non-abstract word, and a new method is proposed to extract visual keyword collections from the corresponding visual seeds. Secondly, according to the traits of abstract words, an algorithm based on subtraction regions is proposed to extract the visual keyword seeds and corresponding collections for each abstract word. Thirdly, we propose an adaptive parameter method and a fast solution algorithm to determine the similarity threshold of each keyword. Finally, these methods are combined to improve annotation performance. Experimental results on the Corel 5K dataset verify the effectiveness of the proposed image annotation model, which improves the annotation results under most evaluation metrics.
Reversible Factorization of U Orthogonal Transform and Image Lossless Coding
Xiong Gangqiang, Yu Jiande, Xiong Changzhen, and Qi Dongxu
2012, 49(4):  856-863. 
The U orthogonal transform is applied to lossless image coding, and factorizations of U orthogonal matrices into triangular elementary reversible matrices (TERMs) and single-row elementary reversible matrices (SERMs) are investigated. The TERM factorization of an N×N matrix is determined by N-1 free variables, so a locally near-optimal TERM factorization can be found by shrinking the search interval of these N-1 free variables. If row exchange is used, an 8×8 orthogonal matrix has only 40320 (i.e., 8!) SERM factorization forms, and the near-optimal SERM factorization can be found by exhaustive search. Finally, lossless image coding is implemented using reversible U matrices, and the experimental results show that the code rate of lossless compression based on the reversible U transform is comparable to that of near-lossless compression based on the floating-point U orthogonal transform; the coding efficiency of the SERM factorization outperforms that of the TERM factorization; and the image coding performance of the degree-3 U orthogonal transform is close to that of the DCT. As a result, the degree-3 U orthogonal transform can be used for lossless image coding in place of the DCT.
A Low-Power and Low-Cost BIST Scheme Based on Capture in Turn of Sub-Scan Chains
Wang Weizheng, Kuang Jishun, You Zhiqiang, and Liu Peng
2012, 49(4):  864-872. 
Scan-based pseudo-random BIST is currently one of the most popular structured design-for-testability (DFT) methodologies in very large scale integrated circuit testing. However, excessive power dissipation and prohibitively long test time are two serious issues in this testing approach. Various techniques have recently been proposed to address one of these issues, but few can deal with both simultaneously. In this paper, a new scan-based BIST scheme, namely the BIST scheme based on capture in turn of sub-scan chains (BCIT), is proposed. In this scheme, each scan chain is divided into N (N>1) sub-chains. During test, using a scan chain disabling technique, the sub-chains in a scan chain are active in turn in both scan shift and capture cycles, i.e. only one sub-chain is active at a time. Thus, the switching activity in the scan cells can be kept at a low level. To detect random-pattern-resistant faults, an LFSR seed generation algorithm compatible with the proposed test scheme is presented as well. Experimental results on the ISCAS'89 benchmark circuits show that, compared with the conventional BIST scheme, the proposed strategy achieves not only a reduction of about (N-1)/N in average and peak power, but also a significant reduction in test application time and in the seed storage needed for LFSR reseeding.
Test Compression Approach of Adopting Cyclic Shift and Optimal Coding
Liu Jie, Liang Huaguo, and Jiang Cuiyun
2012, 49(4):  873-879. 
The increasing test cost of ICs is becoming harder to accept, and hence a simple and highly effective solution is proposed in this paper. A cyclic shift technique is applied to test data compression, which makes use of don't-care bits in the test set more effectively than general shift techniques. Combined with XOR logic, the proposed scheme accumulates don't-care bits and further increases the compatibility and inverse compatibility between a test vector and its reference vector. Based on the statistics of the possible shift states, a Huffman tree is built and the optimal code is found in the coding process, so that short codewords are used more often and long codewords less often. Analysis and experimental results show that the proposed scheme increases the test data compression ratio and decreases test time with very low additional hardware overhead, and that it is superior to existing run-length coding schemes and similar block coding schemes.
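
A minimal sketch of the coding step (standard Huffman construction from measured state frequencies; the state alphabet below is invented for illustration, and the cyclic-shift and XOR preprocessing are omitted):

```python
import heapq

def huffman_codes(freq):
    """Build a Huffman code from symbol frequencies, e.g. the measured
    occurrence counts of the shift states."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)          # two least frequent subtrees
        f2, i2, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, i2, merged))
    return heap[0][2]

# Frequent shift states get the shortest codewords.
print(huffman_codes({"no-shift": 120, "shift-1": 60, "shift-2": 25, "invert": 10}))
```
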
Two Methods for Reducing the Area Overheads of Self-Feedback Testing
Kuang Jishun, Jin Liyun, Wang Weizheng, and You Zhiqiang
2012, 49(4):  880-886. 
In self-feedback testing, a circuit under test can be regarded as an available resource, not only as a tested object. By feeding back some of the interior nodes of the circuit under test to its input terminals, the circuit can generate and apply a test set serially after an initial vector is applied from outside the circuit. In this paper, the previous grouping method and the feedback node assignment are improved. A depth-first algorithm with information matrices for finding a common path in a number of directed graphs is presented. Experimental results on the ISCAS85 benchmark circuits and the MinTest test sets demonstrate that the proposed algorithm reduces the extra area and improves the fault efficiency.
An Efficient Hierarchical Object Placement Algorithm for Object Storage Systems
Chen Tao, Xiao Nong, and Liu Fang
2012, 49(4):  887-899. 
With the prevalence of object-based storage systems, one of the big challenges in such systems is designing an effective object placement algorithm that can locate objects in constant time, distribute data evenly among object-based devices, and adapt well to changes in the device set. Most proposed approaches are suited to a single-level mode, while multi-level approaches cannot locate objects in constant time and adapt poorly. This paper presents a novel hierarchical object placement algorithm to distribute several petabytes of objects among tens or hundreds of thousands of devices. Specifically, it uses the Max-Min algorithm to classify the devices into classes according to their different configurations, and then the proposed EFAH hashing algorithm assigns data between classes and within each class. Theoretical analysis and experimental study show that this new hierarchical object placement locates data in constant time, reducing the computation overhead of the metadata server and avoiding a performance bottleneck. Moreover, it distributes objects evenly among devices to balance the I/O load. When devices change, our approach redistributes only a small fraction of objects to preserve even distribution, so that system performance is not affected while the I/O load is rebalanced.
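
For illustration only, a generic two-level placement sketch using rendezvous (highest-random-weight) hashing; this is not the paper's EFAH algorithm or its Max-Min device classification, but it shows the constant-time, low-migration lookup pattern such schemes aim for:

```python
import hashlib

def _score(key, name):
    return int(hashlib.md5(f"{key}:{name}".encode()).hexdigest(), 16)

def place(object_id, classes):
    """classes: {class_name: [device_name, ...]}, e.g. produced by grouping
    devices with similar configurations.  Pick a class, then a device inside
    it, each by highest-random-weight; when a device is added or removed,
    only the objects that scored it highest move."""
    cls = max(classes, key=lambda c: _score(object_id, c))
    return cls, max(classes[cls], key=lambda d: _score(object_id, d))

print(place("obj-42", {"fast": ["osd1", "osd2"], "slow": ["osd3", "osd4", "osd5"]}))
```
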