ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 June 2009, Volume 46 Issue 6
Paper
A Bayesian Network-Based Search Method in Unstructured Peer-to-Peer Networks
Qian Ning, Wu Guoxin, and Zhao Shenghui
2009, 46(6):  889-897. 
Object discovery is a fundamental service and a critical issue in peer-to-peer (P2P) networks, with a great impact on their performance and scalability. Many search methods have been proposed for unstructured P2P networks in the past few years, but complicated organization, high search cost, and maintenance overhead make them less practicable. To avoid these weaknesses, the authors propose an adaptive and efficient search method for unstructured P2P networks, the Bayesian network-based search method (BNS), which retains the simplicity, robustness, and fully decentralized nature of Gnutella. The approach assumes that if a peer has a particular resource, and that resource is related to a resource of interest, then the peer is very likely to have the resource of interest as well. The scheme utilizes feedback from previous searches, covering not only the object of interest itself but also objects semantically related to it. It applies a Bayesian network to establish an inference model, infers probabilities from that model, and probabilistically directs future searches. Experimental results show that BNS is effective, achieving a high success rate and more discovered objects with low bandwidth consumption and little maintenance, and adapting well to changing network topologies.
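For illustration, a minimal sketch of how such probability-guided forwarding could look; the Peer class, the cond_prob table, and the scoring rule are hypothetical stand-ins for the paper's learned Bayesian network, not its implementation:

```python
import random

# Hypothetical sketch: probability-guided query forwarding in the spirit of BNS.
# cond_prob plays the role of the conditional probabilities a learned Bayesian
# network would supply; related_hits records feedback from earlier searches.

class Peer:
    def __init__(self, name, neighbors):
        self.name = name
        self.neighbors = neighbors      # names of neighboring peers
        self.related_hits = {}          # neighbor -> {resource: hit count}

    def score(self, neighbor, target, cond_prob):
        """Estimate P(neighbor holds target) from feedback on related resources."""
        p = 0.1                         # prior for a neighbor with no feedback
        for resource, hits in self.related_hits.get(neighbor, {}).items():
            # cond_prob[(resource, target)] ~ P(target | resource) from the model
            p = max(p, cond_prob.get((resource, target), 0.0) * min(1.0, hits / 5))
        return p

    def choose_neighbors(self, target, cond_prob, k=2):
        """Probabilistically direct the query to k neighbors (roulette wheel)."""
        weights = [self.score(n, target, cond_prob) for n in self.neighbors]
        return random.choices(self.neighbors, weights=weights, k=k)
```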
WPathload: A Modified Available Bandwidth Measurement Algorithm
Zeng Bin, Zhang Dafang, Li Wenwei, and Xie Gaogang
2009, 46(6):  898-904. 
Available bandwidth, one of the most crucial network resources, directly influences user-perceived performance, and its accurate measurement and estimation is an essential problem in traffic engineering, network monitoring, and the design of transport protocols. Several tools have consequently been proposed to measure end-to-end available bandwidth. Among them, Pathload is one of the important methods, but it still has some problems, e.g. long convergence time and large probe traffic. To solve these problems, WPathload, a modified available bandwidth measurement algorithm based on the delay jitter trend, is proposed. In WPathload, a delay jitter metric is designed that indicates the relationship between the probing rate and the end-to-end available bandwidth; by calculating this metric, the end system can adjust the transmission rate quickly. Furthermore, the fleet's sending rate is replaced by its arrival rate when refreshing the upper limit of available bandwidth, so that the available bandwidth can be estimated quickly without a large influence on existing traffic. Simulation experiments show that WPathload measures end-to-end available bandwidth with less overhead and a faster convergence rate than Pathload. Furthermore, the proposed algorithm rapidly reflects changes of bandwidth, thus improving the capability of tracing bandwidth changes.
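As a sketch of the Pathload-family search loop that WPathload refines, the binary search below halves the rate interval according to a jitter-trend verdict; probe_jitter_trend is a stand-in for real packet-train measurement, and only the convergence logic is shown:

```python
# Sketch of the iterative rate search used by Pathload-style tools: send a
# packet fleet at a trial rate, and let the one-way delay jitter trend decide
# whether that rate exceeds the available bandwidth.

def estimate_avail_bw(probe_jitter_trend, lo=0.0, hi=100.0, resolution=1.0):
    """Binary search for the available bandwidth (Mb/s) on [lo, hi]."""
    while hi - lo > resolution:
        rate = (lo + hi) / 2
        if probe_jitter_trend(rate):    # increasing jitter: rate > avail-bw
            hi = rate                   # tighten the upper limit
        else:
            lo = rate                   # the path sustains this rate
    return (lo + hi) / 2

# Toy path whose available bandwidth is 37 Mb/s:
print(estimate_avail_bw(lambda r: r > 37.0))   # converges near 37
```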
ENCBP—An Extended Multi-OSs Remote-Booting Method
Wei Li, Xu Guangbin, Zhang Yaoxue, Xia Nan, and Kuang Wenyuan
2009, 46(6):  905-912. 
With the advance of hardware, software, and networks, the computing paradigm is shifting from mainframes towards pervasive computing. In pervasive computing, users can transparently use any computing service through the pervasive infrastructure: even the operating system itself can be acquired over the network as a service. One way to realize that vision is remote-booting operating systems over networks, in which an efficient, secure, and reliable remote-booting method is a key element. Traditional remote-booting methods such as RPL and PXE are not suitable for network computing because of their inability to support multiple OSs or their poor security. NCBP (network-based client boot protocol) supports multiple OSs, but it is inefficient when booting mono-kernel operating systems such as Linux/Unix and is vulnerable to misbehavior such as spoofing. The authors propose ENCBP, an extended NCBP for remote-booting multiple OSs. Without compromising the functionality of NCBP, ENCBP improves the remote-booting process by differentiating the kernel types of the loaded operating systems. ENCBP further enhances the security of the remote-booting process by introducing SSFTP, a new and safe file transmission protocol, and removes the need for the option ROM specified by NCBP. Testing results show the feasibility and effectiveness of the proposed method.
A Method for Modeling and Test Selection of Interoperability
Li Hua, Ye Xinming, Wu Chengyong, Wang Long, and Wang Lingling
2009, 46(6):  913-919. 
Interoperability testing is a basic guarantee for interconnecting network devices and transporting information between them. It involves at least two IUTs (implementations under test); sometimes one of them, called the QE (qualified equipment), plays a measurement role during the testing process and can be used to pass a verdict on the other IUT's interoperability capability. Firstly, the content of interoperability testing of a protocol is introduced, and the relation between interoperability testing and conformance testing is given. Secondly, according to the nondeterministic finite state machine of the specification and current experience of interoperability testing, a probabilistic nondeterministic finite state machine (PNFSM) is constructed step by step; the resulting model efficiently describes the current practice of interoperability testing. Thirdly, based on the PNFSM and a certain policy, a binary tree covering all states of the PNFSM is generated, and algorithms for selecting test sequences from the binary tree are presented. The algorithms' validity is shown through an example, and the final test sequences are listed in a table. Furthermore, the counting-to-infinity scenario in RIP is used as an experiment to illustrate the modeling method. Finally, conclusions and future research work are given.
Design and Implementation of a Distributed Multi-Point Concurrent Test System
Luo Hao and Zeng Huaxin
2009, 46(6):  920-926. 
With the rapid expansion of the Internet, IP routers are becoming more and more important and therefore must be exhaustively tested before being put into practice. However, existing test methodologies and test systems for IP routers hardly keep pace with the state of the art. Starting from the deficiencies of existing stand-alone test systems for IP routers, the necessity of constructing distributed test systems is analyzed from several aspects, and a new test system called the distributed multi-point concurrent test system (DMC-TS) is introduced. DMC-TS is composed of one distributed test manager and several test agents. It has excellent scalability and extensibility and can thus simulate realistic network circumstances for the routers under test. Key issues in the implementation of DMC-TS are discussed in detail, including the implementation of the distributed test manager and the test agents as well as the synchronization problem that is common in distributed test environments; two types of synchronization problems are identified and corresponding solutions given. With these issues solved, the proposed system can be used to conduct conformance, performance, and interoperability testing for IP routers. Experiments are carried out to illustrate the feasibility and practicability of the proposed test system.
Internet Routing Correlation Analysis and Monitoring System Design
Liang Wei, and Bi Jingping
2009, 46(6):  927-933. 
The Internet has evolved from an educational network into a public business and information infrastructure. Monitoring and analyzing the routing plane, which lies at the heart of the Internet, has significant theoretical and practical implications. Internet routing is generally partitioned into intra-domain routing and inter-domain routing. Because inter-domain and intra-domain routing protocols are often interdependent, problems with one protocol can affect the performance of the other. Much work has been done on Internet routing monitoring and analysis, and it helps operators recover their networks from routing disasters. However, most previous work centers on a single routing protocol and therefore performs poorly at providing a comprehensive routing view and protocol interaction analysis. Motivated by this, the intra-domain and inter-domain routing correlation analysis technique is explored in this paper. The authors first propose an Internet routing topology model (IRTM), then present a key monitoring algorithm, and finally implement the Internet routing monitoring system (IRMS). IRMS, based on OSPF and BGP monitoring, provides correlated routing analyses including full-scale routing topology construction and protocol interaction analysis. IRMS has been applied to routing monitoring in a regional ISP network for one month, and the results show that it gives network operators the facilities for in-depth routing monitoring.
A Node-Level Congestion Avoidance Algorithm in Wireless Sensor Networks
Sun Guodong, Liao Minghong, and Qiu Shuo
2009, 46(6):  934-939. 
Network congestion happens when the source traffic load exceeds the maximal transport capacity at some point in a network. In wireless sensor networks, node-level congestion leads to a large amount of packet drop, degrades the transport capacity, and increases network latency. In particular, the extra packet retransmissions under congestion waste the limited energy of network nodes and shorten the network lifetime. End-to-end congestion control as used in wired networks is not appropriate for wireless sensor networks, due to their radio channel and traffic pattern. In this paper, a node-level congestion avoidance algorithm for wireless sensor networks is proposed. The algorithm consists of two parts: sending-window-based congestion avoidance and a priority-based packet sending strategy. Under the proposed algorithm, every sensor node assigns a sending window to each of its upstream nodes according to some strategy, and an upstream node that obtains an available sending window sends its highest-priority packet first, improving network performance metrics such as fairness and latency. Simulation results show that the proposed algorithm is energy-efficient, not only reducing the packet drop rate over the network but also effectively improving transport fairness and the average network latency.
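A minimal sketch of the two mechanisms just described, with an assumed equal-share window policy; all class and method names are hypothetical:

```python
import heapq
from itertools import count

# Sketch: a downstream node grants each upstream node a sending window (equal
# shares of its free buffer, an assumed policy), and a granted upstream node
# transmits its highest-priority packets first.

_seq = count()   # tie-breaker so equal-priority packets never get compared

class DownstreamNode:
    def __init__(self, buffer_free, upstream_ids):
        self.buffer_free = buffer_free
        self.upstream_ids = upstream_ids

    def assign_windows(self):
        """Split the free buffer evenly among upstream senders."""
        share = self.buffer_free // max(1, len(self.upstream_ids))
        return {uid: share for uid in self.upstream_ids}

class UpstreamNode:
    def __init__(self):
        self.queue = []                  # min-heap: lower value = higher priority

    def enqueue(self, priority, packet):
        heapq.heappush(self.queue, (priority, next(_seq), packet))

    def send(self, window):
        """Send up to `window` packets, highest priority first."""
        sent = []
        while window > 0 and self.queue:
            sent.append(heapq.heappop(self.queue)[2])
            window -= 1
        return sent
```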
Quantitative Evaluation of the Cryptographic Block’s Resistibility to Power Analysis Attack at Different Design Level
Tong Yuanman, Wang Zhiying, Dai Kui, and Lu Hongyi
2009, 46(6):  940-947. 
In the design and implementation of cost-effective, power-analysis-resistant secure chips, it is necessary to quantitatively analyze a cryptographic block's ability to withstand power analysis attacks. The key to such quantitative analysis is evaluating the resistibility to power analysis attack and simulating the instantaneous power trace. The number of power samples required for a successful power analysis attack is used to characterize the resistibility, and it is computed from the signal-to-noise ratio of the corresponding attack. Computing the number of power samples in turn requires simulating the instantaneous power trace of the cryptographic block, expressed as a discrete-time sequence of instantaneous current rather than the average or peak power consumption. A method is presented to simulate the cryptographic block's instantaneous power trace throughout the design cycle, covering RTL (register transfer level) design, synthesis, and place & route. Two speedup methods, trading space for time and multi-thread parallel simulation, are proposed, so that the simulation is accelerated and remains applicable to power trace simulation of large-scale circuits.
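The paper's own derivation is not reproduced here; as a hedged illustration, the textbook two-sample sizing rule for detecting a mean difference at a given signal-to-noise ratio shows how sample counts scale with SNR:

```python
from math import ceil
from statistics import NormalDist

# Generic statistics, not the paper's formula: to detect a mean difference d
# in noise of standard deviation sigma (SNR = d / sigma) at significance alpha
# and power 1 - beta, roughly 2 * ((z_alpha + z_beta) / SNR)^2 samples suffice.

def samples_needed(snr, alpha=0.01, beta=0.05):
    nd = NormalDist()
    z_a, z_b = nd.inv_cdf(1 - alpha), nd.inv_cdf(1 - beta)
    return ceil(2 * ((z_a + z_b) / snr) ** 2)

print(samples_needed(0.1))   # low SNR: thousands of power traces
print(samples_needed(1.0))   # high SNR: a few dozen
```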
A Dynamic Access Control Model for Inter-Operation in Multi-Domain Environment Based on Risk
Tang Zhuo, Zhao Lin, Li Kenli, and Li Ruixuan
2009, 46(6):  948-955. 
The rapid development of the Internet and related technologies has created tremendous possibilities for interoperability between applications in open, heterogeneous distributed environments. Interoperability provides a means for distributed applications to share resources and services, which improves performance and resource utilization. Access control is a crucial security technology: it effectively controls legal users' access to sensitive resources and ensures that users access only the resources they are entitled to. Given the complexity of the multi-domain environment and the ceaseless evolution of secure information sharing, traditional access control methods cannot ensure the security of data exchange, nor can traditional access control models satisfy the dynamic requirements of the multi-domain environment. By introducing the concept of risk, the authors propose a dynamic access control model for multi-domain environments based on the risk of inter-operations. The risk grade of an access policy is calculated from the history of inter-operations among domains, the security degree of the objects, and the safety factor of the access events. By adjusting the access policies with high risk grades, the risk in the system can be controlled in real time. Security analysis shows that this method reinforces the effectiveness of access control and the security of the multi-domain environment.
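A hedged sketch of how a risk grade could combine the three factors named above; the weights and the linear combination rule are illustrative assumptions, not the paper's formula:

```python
# Illustrative only: the weights and the combination rule are assumptions.

def risk_grade(violation_rate, object_security_degree, event_safety_factor,
               weights=(0.5, 0.3, 0.2)):
    """Higher value = riskier policy; all inputs normalized to [0, 1]."""
    w1, w2, w3 = weights
    return (w1 * violation_rate                 # bad inter-operation history
            + w2 * object_security_degree       # sensitivity of the object
            + w3 * (1 - event_safety_factor))   # unsafe access events

def adjust_policies(policies, threshold=0.6):
    """Suspend the policies whose risk grade exceeds the threshold."""
    return [p for p in policies if risk_grade(*p["factors"]) <= threshold]
```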
On the Relativity of Binary Derivation and Autocorrelation Randomness Test
Fan Limin, Feng Dengguo, and Chen Hua
2009, 46(6):  956-961. 
Randomness tests play an important role in the application of cryptography. Many randomness tests exist, but it is impossible to apply all of them in practice, so it is significant to study the relations among test items. In this paper, the relation between two important randomness tests, the autocorrelation test and the binary derivative test, is studied. The procedures of the two tests are analyzed on the basis of their statistical theory, and it is concluded, from a parity property of Yang Hui's (Pascal's) triangle, that the binary derivative test with parameter k = 2^t is equivalent to the autocorrelation test; it is therefore redundant to apply both tests to the same sequence simultaneously. This conclusion is also confirmed experimentally. In addition, it is shown that when the test parameter k equals 2^t - 1 in the binary derivative test, each bit of the derivative sequence depends on all the corresponding bits of the initial sequence. This work is helpful for selecting reasonable and scientific parameters in practical randomness testing.
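The claimed equivalence at k = 2^t can be checked empirically: by the parity pattern of Yang Hui's triangle (Lucas' theorem), the 2^t-th binary derivative collapses to s_i XOR s_{i+2^t}, the very sequence the autocorrelation test with shift 2^t examines:

```python
import random

# Empirical check: the 2^t-th binary derivative of s equals the XOR sequence
# s_i ^ s_{i+2^t}, because C(2^t, j) is odd only at j = 0 and j = 2^t.

def derivative(bits):
    return [bits[i] ^ bits[i + 1] for i in range(len(bits) - 1)]

def kth_derivative(bits, k):
    for _ in range(k):
        bits = derivative(bits)
    return bits

s = [random.getrandbits(1) for _ in range(64)]
t = 3
k = 2 ** t
assert kth_derivative(s, k) == [s[i] ^ s[i + k] for i in range(len(s) - k)]
```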
Research and Design of Reconfigurable Computing Targeted at Block Cipher Processing
Yang Xiaohui, Dai Zibin, and Zhang Yongfu
2009, 46(6):  962-967. 
With the development of information technology and network communication, increased security demands must be satisfied in network communication applications. Reconfigurable computing is a computing paradigm that combines reconfigurable hardware processing units with a software-programmable processor. A cipher processing system designed with reconfigurable computing technology can support multiple cryptographic algorithms, achieving both efficiency and flexibility in cipher processing and removing a hidden weakness of fixed-function cipher processing systems. Based on an analysis of the processing structure of popular block cipher algorithms, the authors propose a reconfigurable cipher processing architecture (RCPA) using the design methods of reconfigurable processing architectures. A prototype based on RCPA is implemented on an Altera FPGA, and the logic synthesis, placement, and routing of RCPA are accomplished in a 0.18μm CMOS technology. The experimental results indicate that on the RCPA prototype, the performance of many block ciphers is 10 to 20 times higher than on a high-performance general-purpose processor and 1.1 to 5.1 times higher than on other specialized reconfigurable frameworks. The results show that RCPA offers high flexibility for most block cipher algorithms while achieving relatively high performance.
A Fast Ant Colony Optimization Algorithm for Traveling Salesman Problems
Ji Junzhong, Huang Zhen, and Liu Chunnian
2009, 46(6):  968-978. 
Ant colony optimization (ACO) is a population-based metaheuristic that effectively solves combinatorial optimization problems such as the traveling salesman problem (TSP) and the multidimensional knapsack problem (MKP). However, improving the performance of ACO algorithms remains an active research topic: although many ACO algorithms solve TSPs effectively, an application bottleneck is that they cost too much time to reach an optimal solution. Combining a new pheromone updating scheme with an improved stochastic search strategy, a fast ACO algorithm for solving TSPs is presented in this paper. First, a new pheromone increment model called the ant constant, which conserves the total energy of the ants, is introduced to capture the pheromone difference between candidate paths. Meanwhile, a pheromone diffusion model, based on the information fountain of a path, is established to reflect the strength field of pheromone diffusion faithfully and to strengthen collaboration among ants. Finally, a simple mutation strategy with low computational complexity is adopted to optimize the evolution result. Experimental results on different benchmark data sets show that the proposed algorithm not only finds better solutions but also greatly improves the convergence speed.
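For orientation, a standard ACO skeleton for the TSP (roulette-wheel city choice plus evaporation and deposit); the paper's ant-constant increment and pheromone-diffusion models are not reproduced here:

```python
import random

# Standard ACO skeleton for a symmetric TSP: pheromone-times-desirability
# roulette selection, then evaporation and tour-quality deposit.

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def build_tour(dist, tau, alpha=1.0, beta=2.0):
    n = len(dist)
    tour = [random.randrange(n)]
    unvisited = set(range(n)) - {tour[0]}
    while unvisited:
        cur, cities = tour[-1], list(unvisited)
        weights = [(tau[cur][j] ** alpha) * ((1.0 / dist[cur][j]) ** beta)
                   for j in cities]
        nxt = random.choices(cities, weights=weights, k=1)[0]
        tour.append(nxt)
        unvisited.discard(nxt)
    return tour

def update_pheromone(tau, tours, dist, rho=0.5, Q=1.0):
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= 1 - rho                  # evaporation
    for tour in tours:
        deposit = Q / tour_length(tour, dist)     # shorter tour, more pheromone
        for i in range(len(tour)):
            a, b = tour[i], tour[(i + 1) % len(tour)]
            tau[a][b] += deposit
            tau[b][a] += deposit

n = 5
dist = [[abs(i - j) + 1 for j in range(n)] for i in range(n)]   # toy instance
tau = [[1.0] * n for _ in range(n)]
for _ in range(20):
    update_pheromone(tau, [build_tour(dist, tau) for _ in range(10)], dist)
```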
Computing Most Specific Concept in Description Logic with Transitive Roles and Existential Restrictions
Jiang Yuncheng, Tang Suqin, Wang Ju, and Zhou Shengming
2009, 46(6):  979-987. 
Description logic is a logical reconstruction of frame-based knowledge representation languages, with the aim of providing a simple, well-established declarative semantics to capture the meaning of structured representations of knowledge. This paper analyzes the fundamentals of non-standard inferences in description logic, especially the current research progress on and open problems of MSC (most specific concept) inference. Addressing the inability of existing MSC inference to handle transitive roles and existential restrictions, MSC inference for the description logic εL^+ with transitive roles and existential restrictions is studied. A new kind of εL^+-description graph is presented. An algorithm for approximating the MSC in εL^+ is given using description trees and description graphs, and its correctness is proved via εL^+-description tree homomorphisms and description graph homomorphisms. As a by-product, an instance reasoning algorithm for εL^+ is obtained using εL^+-description trees and description graph homomorphisms, and its correctness is also proved. The MSC inference algorithm for εL^+ provides a theoretical foundation for MSC inference in more expressive description logics such as ALε.
Fast Relaxed Algorithms of Maximum Variance Unfolding
Wang Qinggang and Li Jianwei
2009, 46(6):  988-994. 
Nonlinear dimensionality reduction is a challenging problem encountered in a variety of high-dimensional data analysis tasks, including machine learning, pattern recognition, scientific visualization, and neural computation. Based on the notion of local isometry, maximum variance unfolding (MVU) was recently proposed for nonlinear dimensionality reduction. By pulling the input patterns as far apart as possible subject to strict local distance-preserving constraints, MVU can learn a faithful low-dimensional structure of a manifold embedded in high-dimensional space. However, the final optimization of MVU requires solving a semidefinite programming (SDP) problem, whose huge computational and memory cost makes MVU infeasible for large-scale implementations. To solve this problem, a fast algorithm, relaxed MVU (RMVU), is proposed, in which the strict local distance-preserving constraints of MVU are relaxed, inspired by the approximate local structure preservation of Laplacian eigenmaps. The optimization then becomes a generalized eigen-decomposition problem, and the computational intensity is significantly reduced. For even larger data sets, an improved variant, landmark-based relaxed MVU (LRMVU), is also proposed. Theoretical analysis and experiments on an artificial data set and actual images show that RMVU and LRMVU greatly reduce the computational and memory cost of MVU.
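As a hedged illustration of the cost profile after relaxation, the Laplacian-eigenmaps-style computation below solves a generalized eigenproblem L v = λ D v in place of an SDP; RMVU's actual matrices differ, and only the shape of the computation is the point:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

# Laplacian-eigenmaps-style embedding: build a k-NN graph and solve the
# generalized eigenproblem L v = lambda D v. Assumes a connected graph.

def embed(X, k=8, dim=2):
    n = X.shape[0]
    dists = cdist(X, X)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(dists[i])[1:k + 1]:   # k nearest neighbors of i
            W[i, j] = W[j, i] = 1.0
    D = np.diag(W.sum(axis=1))
    L = D - W
    _, vecs = eigh(L, D)                          # generalized eigenproblem
    return vecs[:, 1:dim + 1]                     # skip the constant eigenvector

Y = embed(np.random.rand(200, 10))                # 200 points, 10 dims -> 2 dims
```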
A Δ-Tree Based Similarity Join Processing for High-Dimensional Data
Liu Yan and Hao Zhongxiao
2009, 46(6):  995-1002. 
The similarity join, an important data mining primitive, can be used to speed up applications such as similarity search, data analysis, and data mining. So far, most research has focused on executing high-dimensional joins over large amounts of disk-based data. The increasing size of main memory available on current computers and the need for efficient processing of spatial joins suggest that spatial joins for a large class of problems can be processed in main memory. The Δ-tree is a novel multi-level index structure that speeds up high-dimensional queries in main memory and has been proven an efficient indexing method. Each level in the Δ-tree represents the data space at a different dimensionality: the number of dimensions increases towards the leaf level, which contains the data at full dimensionality, and the reduced dimensions are obtained using principal component analysis. Exploiting the properties of the Δ-tree, a similarity join algorithm based on it, Δ-tree-join, is presented. Its top-down scheme computes distances with fewer dimensions and completes the join processing efficiently. Experimental results indicate that Δ-tree-join outperforms the state-of-the-art algorithms EGO-join and EGO*-join by a wide margin, and is an efficient similarity join method.
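A sketch of the pruning principle the Δ-tree's levels rest on: after an orthogonal PCA rotation, the distance over the leading components lower-bounds the full distance, so candidate pairs can be rejected early. The flat ε-join below is illustrative only; the tree structure itself is omitted:

```python
import numpy as np

# Flat epsilon-join with PCA partial-distance pruning. The orthogonal rotation
# preserves distances, and the squared distance over the leading d_part
# components lower-bounds the full squared distance.

def pca_rotate(X):
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt.T            # coordinates ordered by decreasing variance

def eps_join(X, Y, eps, d_part=4):
    Z = pca_rotate(np.vstack([X, Y]))
    R, S = Z[:len(X)], Z[len(X):]
    pairs = []
    for i, x in enumerate(R):
        for j, y in enumerate(S):
            if np.sum((x[:d_part] - y[:d_part]) ** 2) > eps ** 2:
                continue        # pruned without touching the full dimensions
            if np.sum((x - y) ** 2) <= eps ** 2:
                pairs.append((i, j))
    return pairs

X, Y = np.random.rand(300, 16), np.random.rand(300, 16)
print(len(eps_join(X, Y, eps=0.5)))
```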
Generalization Error Bound for the Multi-Class Classification Algorithm Based on the Analytical Center of Version Space
Zeng Fanzi, Xiao Degui, Li Renfa, and Luo Juan
2009, 46(6):  1003-1008. 
The analytical center machine, based on the analytical center of version space, outperforms the support vector machine, especially when the version space is elongated or asymmetric. While the analytical center machine for binary classification is well understood, little is known about the corresponding multi-class classification, which remains a significant theoretical and practical challenge in machine learning. The common multi-class approach, one-versus-all, repeatedly constructs classifiers to separate a single class from all the others, which leads to daunting computation and low classification efficiency. The multi-class support vector machine corresponds to a simple quadratic optimization, but it is not very effective when the version space is asymmetric or elongated. Thus, a multi-class classification approach based on the analytical center of version space, which corresponds to a quadratically constrained linear optimization, is proposed to address these problems. To validate its generalization performance theoretically, an upper bound on its generalization error is formulated and proved. Experiments on the wine recognition and glass identification datasets show that the proposed approach outperforms the multi-class support vector machine in generalization error.
BJUT-3D Large Scale 3D Face Database and Information Processing
Yin Baocai, Sun Yanfeng, Wang Chengzhang, and Ge Yun
2009, 46(6):  1009-1018. 
3D face recognition has become one of the most active research topics in face recognition due to its robustness to variations in pose and illumination, and a 3D face database is the basis of this work. The design and construction of such a database mainly involve the acquisition of prototypical 3D face data, preprocessing and standardization of the data, and the structure design. Currently, the BJUT-3D database is the largest Chinese 3D face database in the world. It contains 1200 Chinese 3D face images and provides both the texture and shape information of human faces; this data resource plays an important role in 3D face recognition and face modeling. In this paper, the data description, data collection scheme, and post-processing methods are provided to support use of the data and future extension. A dense correspondence method for 3D face data is introduced: dense correspondence means that key facial points are carefully labeled and aligned across different faces, which is useful for a broad range of face analysis tasks. As an application, a pose estimation and cross-pose face recognition algorithm is proposed. Experimental results show that the proposed algorithm performs well.
Modeling of Information System Survivability Analysis Based on SPN
Zhang Lejun, Guo Lin, Zhang Bing, Yang Wu, Wang Wei, and Yang Yongtian
2009, 46(6):  1019-1027. 
A modeling method for information system survivability analysis based on stochastic Petri nets (SPN) is presented for survivable system design. First, the network information system is divided into request, communication, processing, and storage modules according to its workflow, simplifying the SPN model. Next, the formal description of the system workflow is combined with survivability analysis modeling, and SPN modeling methods are described for the universal information system model, service-disabled models, failure-recovery models, module redundancy models, and survivability attribute models covering resistance, recognition, recovery, and adaptation. Renew, an efficient SPN tool, is then used for examination, and relevant algorithms and programs, implemented in Java, are put forward for the qualitative and quantitative analysis of system survivability. Finally, simulation experiments show that this approach has more descriptive ability and expansibility than the stochastic process algebra method: when the simulated system changes, only the corresponding modules of the model need to change to produce new experimental results. The experiments confirm the correctness and effectiveness of the proposed modeling method, which can provide a theoretical basis and guidance for designing survivable information systems.
Study on Membership of Mixed Dependency Set in Strong Totally Ordered Temporal Scheme
Wan Jing, Wang Xiaoyu, and Hao Zhongxiao
2009, 46(6):  1028-1035. 
In temporal databases, besides the storage redundancy and update anomalies associated with temporal functional dependencies, there are also those associated with temporal multi-valued dependencies. In a strong totally ordered temporal scheme constrained by temporal functional dependencies and regular temporal multi-valued dependencies, solving the membership problem is essential to designing an effective scheme decomposition algorithm; however, the introduction of multiple time granularities makes the membership problem harder to solve. The concepts of an attribute set's mixed closure on a certain temporal type, an attribute set's mixed closure, an attribute set's mixed dependency base on a certain temporal type, and an attribute set's mixed dependency base in a strong totally ordered temporal scheme are proposed. Algorithms are given for the attribute set's mixed closure, the attribute set's mixed dependency base, and the membership problem of temporal functional dependencies and regular temporal multi-valued dependencies in a strong totally ordered temporal scheme. The algorithms' termination and correctness are proved and their complexities discussed. These algorithms are the basis for further normalization of strong totally ordered temporal schemes.
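For orientation, the classical (non-temporal) attribute-closure computation that such membership algorithms generalize; X → Y is implied by the dependency set iff Y ⊆ closure(X):

```python
# Classical attribute closure for functional dependencies.

def closure(attrs, fds):
    """fds: iterable of (lhs, rhs) pairs of frozensets of attributes."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

fds = [(frozenset("A"), frozenset("B")), (frozenset("B"), frozenset("C"))]
assert closure({"A"}, fds) == {"A", "B", "C"}   # hence A -> C is implied
```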
Multi-Type Nearest Neighbor Queries with Partial Range Constrained
Sun Dongpu and Hao Zhongxiao
2009, 46(6):  1036-1042. 
The multi-type nearest neighbor (MTNN) query is used in real-life applications more widely than the traditional nearest neighbor query. The concept of the multi-type nearest neighbor query with partial range constraints (PCMTNN) is put forward, and a PCMTNN algorithm is proposed for ranges constrained by arbitrary simple polygons. Range queries first retrieve the qualified points among the datasets that carry range constraints; then the remaining datasets, together with the new datasets containing those qualified points, are visited to obtain the final results. The algorithm uses properties of the minimal circumscribed rectangle of an ellipse: such a rectangle is easy to compute, and the region it covers contains the ellipse without being much larger, so the search area can be reduced. A list structure with two flags is introduced: one flag is used in downward pruning, and the combination of the two flags is used in upward pruning. The algorithm traverses an R-tree only once, finds all points in the current search areas, and significantly reduces the number of uselessly visited points. Experimental results and analyses show that the algorithm performs well.
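The minimal circumscribed rectangle mentioned above has a standard closed form; for an ellipse with semi-axes a and b rotated by theta:

```python
from math import cos, sin, sqrt

# Axis-aligned bounding rectangle of an ellipse centered at (cx, cy) with
# semi-axes a, b and rotation theta (a standard closed-form result).

def ellipse_mbr(cx, cy, a, b, theta):
    half_w = sqrt((a * cos(theta)) ** 2 + (b * sin(theta)) ** 2)
    half_h = sqrt((a * sin(theta)) ** 2 + (b * cos(theta)) ** 2)
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)

print(ellipse_mbr(0, 0, 5, 3, 0.0))   # (-5.0, -3.0, 5.0, 3.0)
```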
A Decomposition of the Weakly Invertible Linear Finite Automata
Yao Xinghua, Deng Peimin, Yi Zhong, and Jiang Yuncheng
2009, 46(6):  1043-1051. 
Composition of finite automata is a method of constructing new finite automata and also a way of constructing public keys in finite automata public key cryptosystems. Conversely, studying the decomposition of weakly invertible finite automata is necessary, because it helps to analyze their structure and to solve for their weak inverses. In this paper, it is first proved that weakly isomorphic weakly invertible finite automata have similar decompositions. Secondly, a decomposition of a special weakly invertible linear finite automaton (WILFA) M is considered, and a sufficient condition for its existence is derived from the output weight of states. Thirdly, since any WILFA is weakly isomorphic to the special WILFA M, the decomposition is extended to general WILFAs: a WILFA with delay τ can be decomposed into a WILFA with delay 0 and a finite automaton M_D, and a sufficient and necessary condition for the existence of this decomposition is obtained. Finally, the sufficient condition for the existence of the decomposition is reflected in an algebraic property of the output sequence and partly converted into computing the rank of a matrix. In this decomposition, the component finite automata need not be n-ary weakly invertible finite automata, and the corresponding condition is related only to the output sequence.
Approximating the Directed Minimum Degree Spanning Tree of Directed Acyclic Graph
Yao Guohui, Zhu Daming, and Ma Shaohan
2009, 46(6):  1052-1057. 
Given a directed acyclic graph G=(V,E) with n vertices and a root vertex r∈V, the problem considered is constructing a spanning tree rooted at r whose maximal degree is the smallest among all spanning trees of G rooted at r. An iterative polynomial-time approximation algorithm is given that computes a tree whose maximal degree is at most Δ* + 1, where Δ* is the degree of some optimal tree for the problem. The running time of the algorithm is shown to be O(n^2 log n), where n is the number of vertices. The algorithm does not employ exhaustive enumeration, and its actual running time is likely to be much smaller in practice. Computing low-degree trees is a fundamental problem in computer science with many applications. For example, on the Internet a large volume of mail and news must be broadcast among sites, and one parameter each site may want to reduce is the amount of work it performs; broadcasting over a minimum degree spanning tree is such a solution. The problem is also intuitively appealing due to its seeming simplicity.
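A hedged sketch of the degree-reduction idea behind such approximation algorithms, shown for an undirected graph rather than the directed acyclic case treated in the paper: repeatedly cut a tree edge at a maximum-degree vertex and reconnect across the cut with a non-tree edge whose endpoints both have degree at most the maximum minus two:

```python
from collections import deque

# Local-improvement sketch: adj maps each vertex to a set of neighbors.

def bfs_tree(adj, root):
    tree = {u: set() for u in adj}
    seen, q = {root}, deque([root])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                tree[u].add(w); tree[w].add(u)
                q.append(w)
    return tree

def component(tree, start, banned_edge):
    a, b = banned_edge
    seen, q = {start}, deque([start])
    while q:
        u = q.popleft()
        for w in tree[u]:
            if {u, w} == {a, b} or w in seen:
                continue
            seen.add(w); q.append(w)
    return seen

def reduce_max_degree(adj, root):
    tree = bfs_tree(adj, root)
    improved = True
    while improved:
        improved = False
        v = max(tree, key=lambda u: len(tree[u]))
        k = len(tree[v])
        for w in list(tree[v]):
            side = component(tree, w, (v, w))   # w's side after cutting (v, w)
            for a in side:
                for b in adj[a] - side:         # edge (a, b) re-crosses the cut
                    if b not in tree[a] and len(tree[a]) <= k - 2 and len(tree[b]) <= k - 2:
                        tree[v].discard(w); tree[w].discard(v)
                        tree[a].add(b); tree[b].add(a)
                        improved = True
                        break
                if improved: break
            if improved: break
    return tree

adj = {0: {1, 2, 3, 4}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2, 4}, 4: {0, 3}}
T = reduce_max_degree(adj, 0)
print(max(len(v) for v in T.values()))   # BFS gives a star; swaps reduce it to 2
```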
A Heuristic Task Allocation Algorithm for Multi-Core Based Parallel Systems
Liu Yi, Zhang Xin, Li He, and Qian Depei
2009, 46(6):  1058-1064. 
Parallel systems based on multi-core or many-core processors have a more complicated structure, and run more tasks, than traditional single-core-based systems. To allocate tasks efficiently in such systems, a task allocation model based on the task interaction graph is built, taking into account processes, threads, and the communications among them. An iteration-based heuristic algorithm is then proposed on top of the model. The algorithm consists of two rounds: processes are assigned to processing nodes in the first round, and the threads of each process are assigned to processing cores in the second. Each round partitions the task interaction graph by iterations with backtracking, and the final partitioned graph corresponds to the allocation of tasks to processing cores. The heuristic algorithm is evaluated against exhaustive search and genetic algorithms on 1400 randomly generated task interaction graphs. The results show that the proposed algorithm finds near-optimal solutions in reasonable time and scales better than genetic algorithms as the number of threads increases, since it finds solutions in much less time.
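A hedged two-round sketch mirroring the structure described above; the greedy heaviest-traffic-first placement stands in for the paper's iterative partitioning with backtracking, and all names are hypothetical:

```python
# Round 1 places processes on nodes, round 2 spreads each process's threads
# over its node's cores. The co-location rule is an illustrative assumption.

def allocate(procs, comm, n_nodes, cores_per_node):
    """procs: {pid: [thread ids]}; comm: {(pid_a, pid_b): traffic volume}."""
    node_of, load = {}, [0] * n_nodes
    # Round 1: co-locate heavily communicating processes when capacity allows.
    for (a, b), _vol in sorted(comm.items(), key=lambda kv: -kv[1]):
        for pid, partner in ((a, b), (b, a)):
            if pid in node_of:
                continue
            target = node_of.get(partner)
            if target is None or load[target] + len(procs[pid]) > cores_per_node:
                target = min(range(n_nodes), key=lambda m: load[m])
            node_of[pid] = target
            load[target] += len(procs[pid])
    for pid in procs:                       # processes with no recorded traffic
        if pid not in node_of:
            target = min(range(n_nodes), key=lambda m: load[m])
            node_of[pid] = target
            load[target] += len(procs[pid])
    # Round 2: round-robin each process's threads over its node's cores.
    core_of = {}
    for pid, threads in procs.items():
        for i, tid in enumerate(threads):
            core_of[tid] = (node_of[pid], i % cores_per_node)
    return node_of, core_of

procs = {"P0": ["t0", "t1"], "P1": ["t2"], "P2": ["t3", "t4", "t5"]}
comm = {("P0", "P1"): 10, ("P1", "P2"): 3}
print(allocate(procs, comm, n_nodes=2, cores_per_node=4))
```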