ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 December 2008, Volume 45, Issue 12
Paper
Interconnection of Godson-3 Multi-Core Processor
Wang Huandong, Gao Xiang, Chen Yunji, and Hu Weiwu
2008, 45(12):  2001-2010. 
The interconnection of the Godson-3 multi-core processor adopts a scalable, distributed architecture based on a 2D mesh, providing a unified topology for chip-level, board-level, and system-level design. The HyperTransport protocol is implemented in Godson-3 both for IO connection and for multi-chip interconnection. A software-configurable routing algorithm provides efficient communication for board-level CC-NUMA systems or larger-scale NCC-NUMA systems. This paper introduces the interconnection network of the Godson-3 multi-core processor. On the chip level, a two-level scalable architecture is implemented inside Godson-3: a 2D mesh connects all the nodes at the top level, and two crossbars inside every node connect four cores and four L2 caches, along with memory controllers and IO controllers. On the board level, a medium-scale CC-NUMA multi-chip system can easily be constituted by interconnecting HyperTransport interfaces that support the cache coherence protocol, and a larger-scale multi-chip system can be constituted by using dedicated networking hardware. With this architectural support, it becomes much easier to build an efficient and scalable shared-memory multi-chip system.
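As a concrete illustration of routing on such a 2D mesh, the Python sketch below implements plain dimension-ordered (XY) routing, a common deadlock-free scheme for mesh topologies; the coordinate scheme and hop encoding are illustrative assumptions, not Godson-3's software-configured routing tables.

# Hypothetical sketch of dimension-ordered (XY) routing on a 2D mesh.
# Node coordinates and the step encoding are illustrative assumptions,
# not the Godson-3 routing configuration.
def xy_route(src, dst):
    """Return the sequence of hops from src to dst on a 2D mesh,
    moving along X first, then Y (deadlock-free dimension order)."""
    (x, y), (dx, dy) = src, dst
    hops = []
    while x != dx:                      # travel along the X dimension first
        x += 1 if dx > x else -1
        hops.append((x, y))
    while y != dy:                      # then along the Y dimension
        y += 1 if dy > y else -1
        hops.append((x, y))
    return hops

print(xy_route((0, 0), (2, 1)))         # [(1, 0), (2, 0), (2, 1)]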
Four Challenges in Transformative Research of Computer Systems
Xu Zhiwei, Li Peixu, and Zha Li
2008, 45(12):  2011-2019. 
As we enter the 21st century, a profound transformation is emerging in the field of computer science and technology, and this is also true for the subfield of computer systems. The main characteristic of this transformation is the leap from man-computer symbiosis to a man-cyber-physical society (a tri-world of people, computers, and things). This raises four fundamental challenges for computer systems research: 1) we must revisit the very concepts of “computation” and “computer system”, and study new workload and usage-pattern analytics; 2) we must investigate new metrics for computer systems; 3) we need to explore possible new computer architectures that directly support networks; 4) we need to discover and make use of emergent phenomena. In this paper, based on research work done by overseas researchers and at the Institute of Computing Technology of the Chinese Academy of Sciences, we discuss the four challenges and point out that the answers are still lacking, which may lead to abundant research opportunities.
DOOC: A Software/Hardware Co-managed Cache Architecture for Reducing Cache Thrashing
Wu Junjie, Yang Xuejun, Zeng Kun, Zhang Baida, Feng Quanyou, Liu Guanghui, and Tang Yuhua
2008, 45(12):  2020-2032. 
Cache memory has become an essential part of modern processors to bridge the increasing gap between CPU and main memory speeds. Cache thrashing is one of the most severe problems affecting cache performance. Research shows that traditional unified cache management by the hardware does not match the diverse memory access patterns in programs, which consequently leads to severe cache thrashing. The authors present data-object oriented cache (DOOC), a novel software/hardware cooperative cache. DOOC dynamically allocates isolated cache segments for different data objects; moreover, the capacity, associativity, block size, and cache coherence protocol implementation of each segment can be configured dynamically by the software, so that varied cache segments match diverse data access patterns. The design and implementation of DOOC are discussed in detail together with the compiling techniques and data pre-fetching strategies. The hardware cost of DOOC is also estimated with CACTI and with a sample FPGA implementation on a LEON3 processor; the results show that DOOC is hardware-efficient. Likewise, the performance of DOOC is tested on both single-core and multi-core platforms through software simulation. Fifteen kernel benchmarks extracted from scientific applications, multimedia programs, and database management routines are tested on a single-core platform. Compared with a traditional cache, DOOC achieves an average reduction of 44.98% in miss rate (the maximum is 93.02%) and an average speedup of 1.20 (the maximum is 2.36). The OpenMP version of NPB is then run on a four-core platform, and the results show an average reduction of 49.69% in miss rate (the maximum is 73.99%).
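To make the software/hardware split concrete, the following toy Python model shows how software might carve isolated, individually configured cache segments per data object; the class and field names are illustrative assumptions, not the actual DOOC interface.

# A toy model of DOOC-style software-managed cache segments: each data object
# gets an isolated segment whose capacity/associativity/block size the
# software picks. Names and fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CacheSegment:
    capacity_kb: int      # segment capacity
    associativity: int    # number of ways
    block_bytes: int      # cache line size

class DOOCModel:
    def __init__(self, total_kb):
        self.free_kb = total_kb
        self.segments = {}

    def allocate(self, obj_name, capacity_kb, associativity=4, block_bytes=64):
        """Carve an isolated segment for one data object, mimicking how
        DOOC matches a segment to that object's access pattern."""
        if capacity_kb > self.free_kb:
            raise MemoryError("no cache capacity left for " + obj_name)
        self.free_kb -= capacity_kb
        self.segments[obj_name] = CacheSegment(capacity_kb, associativity, block_bytes)
        return self.segments[obj_name]

cache = DOOCModel(total_kb=512)
cache.allocate("stream_array", 64, associativity=1, block_bytes=128)  # streaming data
cache.allocate("hash_table", 256, associativity=8)                    # irregular reuse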
Research on User Behavior Trust in Trustworthy Network
Lin Chuang, Tian Liqin, and Wang Yuanzhuo
2008, 45(12):  2033-2043. 
With the rapid development of computer network applications, network security faces serious challenges. International research shows that network security is evolving toward the trustworthy network (TN). Beyond current security mechanisms, the future TN adds behavior trust. TN includes the trust of service providers, the trust of network information transmission, and the trust of end-users. Trust based on user behavior can not only reduce or avoid contact with malicious users, but also reduce the additional monitoring and prevention costs required for mutual trust between service providers and users; research on user behavior trust will therefore improve not only network security but also overall network performance. In this paper, user behavior trust in the trustworthy network is discussed. The authors systematically put forward a framework for the evaluation, prediction, and control of user behaviors, including reliable evaluation of user behavior trust; trust prediction to meet service providers' different combinations of performance and security requirements; trust and risk decision-making based on game theory; a mechanism that introduces user behavior trust into the RBAC (role-based access control) model; and simple, effective user-behavior monitoring and prevention strategies based on user behavior trust. Through an effective combination of these user behavior trust management mechanisms, the unity of static and dynamic control and the unity of trust and risk are implemented, laying the foundation for further study of the trustworthy network.
Is the Reliability of Web Services Related to the Change Rate of Operational Profiles?
Bai Chenggang, Su Liang, Zhao Yingchun, Guo Junhong, and Cai Kaiyuan
2008, 45(12):  2044-2051. 
In comparison with traditional software, the operational environment of Web services has changed significantly: it is no longer closed but open. Operational profiles under an open environment are much more complex and unpredictable, and an important property is that they change continually over time. Because there is a close relationship between software reliability and operational profiles, such varying operational profiles must affect reliability. A natural hypothesis arises: the reliability of Web services is related not only to the operational profiles but also to the change of the operational profiles. As a first step, the authors study this question through a case study, in which the derivative of the operational profiles is used to indicate their change rate. Based on real Web services data, a number of statistical tests are then performed to test whether the reliability of Web services is related to the change rate of the operational profiles; contingency table analysis is used for the independence tests. The tests lead to the conclusion that the reliability of the Web services depends significantly on the change rate of the operational profiles.
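The contingency-table analysis mentioned above reduces to a standard chi-square independence test; the sketch below shows the computation on made-up counts (low/high change rate versus success/failure), not the paper's data.

# Pearson chi-square independence test on a 2x2 contingency table of
# (profile change rate: low/high) x (service outcome: success/failure).
# The counts here are invented for illustration only.
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

observed = [[90, 10],   # low change rate: successes, failures
            [70, 30]]   # high change rate: successes, failures
# Compare against the chi-square critical value with df=1 (3.84 at the 5% level).
print(chi_square_2x2(observed))   # 12.5 here: dependence would be declared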
Complex Dynamical Networks and Their Applications in Software Engineering
Lü Jinhu, Wang Hongchun, and He Keqing
2008, 45(12):  2052-2059. 
With the rapid development of information technology and the life sciences, complex networks have rapidly become a new interdisciplinary research hotspot over the past decade. The 21st century is often called the century of complex systems and complex networks, and the intensive study of complex networks bears directly on daily life and on the development of many important sub-disciplines. How can we improve the transmission efficiency and increase the safety, credibility, and stability of large-scale complex networks? How can we prevent malicious attacks (e.g., computer viruses and epidemics) and decrease random errors in large-scale complex networks? The full and final solution of these challenging problems depends on the development of the theory and technology of complex networks. This paper briefly reviews the main research advances in complex networks and their typical applications in software engineering over the last decade, including modeling, synchronization, control, and networked software. The authors attempt to advance the crossing research between complex networks and software engineering.
Online Semi-Supervised Learning with Multi-Kernel Ensemble
Li Ming and Zhou Zhihua
2008, 45(12):  2060-2068. 
In many practical real-time applications, prediction functions should be learned online as examples arrive in sequence. It is usually infeasible to label all the examples in the stream; however, most state-of-the-art online learning methods that tackle the real-time prediction problem are unable to exploit the unlabeled data. In this paper, an online semi-supervised learning method based on a multi-kernel ensemble is proposed, which enables online learning even when a received example is unlabeled. The method exploits the compatibility over the unlabeled data of multiple learners from different RKHSs, from which a regularized instantaneous risk functional is derived, and online convex programming is employed to minimize the derived risk. Experimental results on UCI data sets and an application to network intrusion detection show that the proposed method can effectively exploit the unlabeled data to improve the performance of online learning.
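A hedged sketch of the ingredients, not the authors' exact risk functional: several RBF kernel learners are updated online, with a hinge-style step on labeled examples and a consensus-shrinking step on unlabeled ones.

# Plausible instantiation of an online multi-kernel ensemble; the concrete
# update rules below are assumptions made for illustration.
import math

class KernelLearner:
    def __init__(self, gamma, lr=0.1):
        self.gamma, self.lr = gamma, lr
        self.support = []                 # list of (example, alpha) pairs

    def predict(self, x):
        return sum(a * math.exp(-self.gamma * sum((xi - xj) ** 2
                   for xi, xj in zip(s, x))) for s, a in self.support)

    def step(self, x, target):
        self.support.append((x, self.lr * target))

learners = [KernelLearner(g) for g in (0.1, 1.0, 10.0)]   # different RKHSs

def online_update(x, y=None):
    preds = [l.predict(x) for l in learners]
    if y is not None:                     # labeled: hinge-style update
        for l, p in zip(learners, preds):
            if y * p < 1:
                l.step(x, y)
    else:                                 # unlabeled: pull learners toward consensus
        consensus = sum(preds) / len(preds)
        for l, p in zip(learners, preds):
            l.step(x, consensus - p)

online_update((0.0, 1.0), y=+1)
online_update((0.2, 0.9))                 # an unlabeled example still updates the ensemble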
Research and Design of High Performance Interconnection Network Switch
Wang Dawei, Cao Zheng, Liu Xinchun, You Dingshan, and Sun Ninghui
2008, 45(12):  2069-2078. 
The high performance interconnection network switch plays a critical role in high performance computing (HPC) systems. Scientific computations, as upper-layer HPC applications, demand not only low latency and high bandwidth from the switch, but also hardware support for collective communications such as broadcast, multicast, and barrier. The HyperLink switch, the core component of the Dawning 5000 interconnection network, has 38.4ns single-stage latency and 160Gbps aggregate bandwidth; furthermore, it supports 16 multicast groups and 16 barrier groups simultaneously. Under ideal conditions, 1024 nodes can finish multicast and barrier operations within 2μs, which greatly improves the performance of scientific applications. A cycle-accurate switch model is also built to evaluate switch performance. Simulation shows that 3 virtual channels are the best performance-cost choice for a 16-port input-buffered switch, and that a 4KB input buffer is sufficient for a 1KB-MTU switch to achieve the highest unicast throughput. A theoretical comparison between multi-rail networks and single-rail networks of the same total bandwidth is also given: the former effectively reduces network latency and thus provides much higher network throughput than the latter. The LogP model is employed to evaluate HyperLink multicast and barrier performance, showing that the HyperLink switch has good scalability and easily supports up to thousands of nodes.
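For intuition about the LogP evaluation, the snippet below estimates a binomial-tree multicast time under LogP with made-up parameter values; it is a modeling sketch, not a measured Dawning 5000 result.

# Back-of-the-envelope LogP estimate of a binomial-tree multicast.
# L = wire latency, o = per-message CPU overhead, g = gap (rate bound);
# the numeric values below are illustrative assumptions.
import math

def logp_multicast_time(nodes, L, o, g):
    """Completion time of a binomial-tree multicast under LogP: each of
    ceil(log2(nodes)) rounds costs a send overhead (bounded by the gap),
    the wire latency, and a receive overhead."""
    rounds = math.ceil(math.log2(nodes))
    return rounds * (max(o, g) + L + o)

# e.g. 1024 nodes with 0.2us latency and 0.05us overhead/gap -> ~3us scale
print(logp_multicast_time(1024, L=0.2, o=0.05, g=0.05))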
Load-Balancing Routing Based on Path Metric for Multi-Channel Wireless Mesh Networks
Ren Juan and Qiu Zhengding
2008, 45(12):  2079-2086. 
Multi-hop infrastructure wireless mesh networks offer increased reliability and coverage and reduced equipment costs over their single-hop counterparts, wireless local area networks. Equipping wireless routers with multiple radios further improves capacity by transmitting over multiple radios simultaneously using orthogonal channels. Good channel-diverse routing is essential for throughput optimization of mesh clients and can alleviate potential congestion at gateways to the Internet, thereby improving per-client throughput. To efficiently utilize the multiple channels in wireless mesh networks, a new path metric, ERC (expected residual capacity), is designed, which considers multiple factors including channel diversity, bandwidth, and interference. The ERC path metric gives a good evaluation of path quality and can be used to choose paths with high throughput and low interference. Combined with the idea of multipath routing, a corresponding load-balancing routing scheme for multi-channel wireless mesh networks is proposed. The routing algorithm seeks several optimal paths for each traffic flow and then distributes the traffic among those paths as evenly as possible, which improves communication quality effectively. Simulation results show that with the proposed load-balancing routing, the aggregate network throughput far surpasses that of single-channel networks and clearly outperforms common multi-channel routing; network delay and packet drop probability also decrease significantly.
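To illustrate how such a metric could drive load balancing, the sketch below scores each path by its bottleneck residual bandwidth with a penalty for consecutive same-channel hops, then splits traffic in proportion to the scores; this weighting is an assumption, the exact ERC formula being defined in the paper.

# Illustrative rendering of ERC-style path scoring and proportional splitting.
def erc_score(path, channel_penalty=0.5):
    """path: list of (residual_bandwidth, channel) hops. The score is the
    bottleneck bandwidth, discounted for consecutive same-channel hops
    (intra-path interference); the discount factor is an assumption."""
    bottleneck = min(bw for bw, _ in path)
    repeats = sum(1 for (_, c1), (_, c2) in zip(path, path[1:]) if c1 == c2)
    return bottleneck * (channel_penalty ** repeats)

def split_traffic(paths, demand):
    """Distribute demand across paths in proportion to their ERC scores."""
    scores = [erc_score(p) for p in paths]
    total = sum(scores)
    return [demand * s / total for s in scores]

p1 = [(40, 1), (35, 2), (30, 3)]      # channel-diverse path
p2 = [(50, 1), (45, 1), (40, 1)]      # single-channel path, self-interfering
print(split_traffic([p1, p2], demand=100.0))   # [75.0, 25.0]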
ASR: An Adaptive Secure Routing Protocol in MANET
Huang Qingyuan, Zeng Yingzhi, and Su Jinshu
2008, 45(12):  2087-2094. 
The design of secure routing protocols is one of the most critical issues in mobile ad hoc networks (MANET). In MANET there exists a subtle threat in which malicious nodes behave abnormally: they participate properly in the route discovery phase like regular nodes, but discard arriving data packets in the forwarding phase. So far, most previous research has focused on protecting routing messages, and little work has dealt with this kind of threat, which makes designing secure routing protocols that counter it a great challenge. In this paper, ASR, a novel adaptive secure routing protocol for mobile ad hoc networks, is proposed. In ASR, a node first identifies several paths through the normal route discovery phase; it then selects fault-free paths via a reinforcement learning mechanism; and a node reputation management mechanism is adopted based on the evidence gathered from reinforcement learning. Finally, in-depth analysis is performed and extensive simulations are conducted in the NS2 simulation environment. Analysis and simulation results show that the proposed adaptive routing protocol can effectively defend against data forwarding attacks while performing efficiently in hostile environments.
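One plausible instantiation of reinforcement-learning path selection is sketched below: an exponentially weighted delivery ratio serves as each path's reputation, with epsilon-greedy exploration. The concrete learning and reputation rules in ASR may differ.

# Epsilon-greedy path selection with reputation learned from delivery
# feedback; parameters and update rule are illustrative assumptions.
import random

class PathSelector:
    def __init__(self, paths, alpha=0.3, epsilon=0.1):
        self.rep = {p: 0.5 for p in paths}   # neutral prior reputation
        self.alpha, self.epsilon = alpha, epsilon

    def choose(self):
        if random.random() < self.epsilon:            # explore
            return random.choice(list(self.rep))
        return max(self.rep, key=self.rep.get)        # exploit best reputation

    def feedback(self, path, delivered):
        """Reinforce with 1 if the packet was acknowledged, 0 if dropped."""
        self.rep[path] += self.alpha * (float(delivered) - self.rep[path])

sel = PathSelector(["A-B-D", "A-C-D", "A-E-D"])
for _ in range(100):
    p = sel.choose()
    sel.feedback(p, delivered=(p != "A-C-D"))  # pretend node C silently drops data
print(max(sel.rep, key=sel.rep.get))           # a path avoiding the black hole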
A Fuzzy-Based Randomness Evaluation Model for Block Cipher
Fan Limin, Feng Dengguo, and Zhou Yongbin
2008, 45(12):  2095-2101. 
Evaluation plays an important role in the security of cryptology, and randomness is one of its most practical aspects. Several test methods and software packages for randomness testing exist, but there is not yet an integrated, applicable quantitative evaluation model for handling the vast test results. In this paper, the randomness evaluation of cryptography is studied with the block cipher as a research instance. A tree-type index system for randomness is proposed by analyzing the design principles of block ciphers, and an evaluation model is built based on fuzzy multi-criteria decision-making. In this model, a membership function is used to process each randomness result, which expresses the continuous and gradual character of randomness and overcomes the information loss introduced by threshold methods. The model offers an effective method to quantitatively evaluate the randomness of block ciphers and provides a foundation for the comprehensive evaluation of cryptography. Analysis also shows that the model is practical, as its space and time consumption is very low. Furthermore, the model provides a general evaluation process for a single index and attribute, and it can easily be modified to handle other fundamental types of cryptographic primitives, such as stream ciphers.
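A condensed illustration of the model's core step: a membership function grades each test's p-value instead of applying a hard 0.01 cut-off, and the index tree is aggregated by weights. The membership shape and the weights below are assumptions, not the paper's calibration.

# Fuzzy grading of randomness test results and weighted aggregation.
def membership(p_value, lo=0.01, hi=0.10):
    """Smoothly map a p-value to [0,1]: 0 below lo, 1 above hi, linear
    in between. This avoids the information loss of a single threshold."""
    if p_value <= lo:
        return 0.0
    if p_value >= hi:
        return 1.0
    return (p_value - lo) / (hi - lo)

def aggregate(results, weights):
    """Weighted fuzzy score over the index tree's leaf tests."""
    return sum(w * membership(p) for p, w in zip(results, weights)) / sum(weights)

p_values = [0.45, 0.08, 0.02, 0.33]      # e.g. frequency test, runs test, ...
weights = [1.0, 1.0, 2.0, 1.0]           # heavier weight on a more critical index
print(aggregate(p_values, weights))      # overall randomness score in [0,1]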
Semantic Web Service Matching Based on Dynamic Description Logic
Peng Hui, Chen Limin, Chang Liang, and Shi Zhongzhi
2008, 45(12):  2102-2109. 
Dynamic description logic (DDL) extends description logic (DL) with a dynamic dimension. In addition to the reasoning mechanism for static knowledge about application domains, DDL provides a mechanism for representing and reasoning about actions by embracing knowledge of actions within DL. DDL is therefore a promising candidate as a logical foundation for semantic Web services when every Web service is regarded as an action on the Web. Exploiting these merits, the authors present a DDL-based approach to the description and matching of semantic Web services: both the goal service of a service consumer and the atomic services from service providers are described as actions in DDL, and the matchmaker then matches the goal service with the supplied services by reasoning about actions. The matching problem between the goal service and the supplied services is thereby reduced to the satisfiability problem of formulas in DDL. Compared with DL-based semantic Web service matching, the DDL-based method describes both static information and actions on the Web in a uniform way, and the reasoning problem about actions can be reduced to the satisfiability of formulas. Compared with first-order predicate logic, which is often used for action reasoning in the situation calculus, the satisfiability problem of formulas in DDL is decidable.
Research Progress in Statistical Relational Learning
Liu Dayou, Yu Peng, Gao Ying, Qi Hong, and Sun Shuyang
2008, 45(12):  2110-2119. 
Interest in statistical relational learning (SRL) has grown rapidly in recent years. SRL integrates relational or logical representations and probabilistic reasoning mechanisms with machine learning, and it can solve many complicated relational problems in the real world. It has important applications in many fields, such as the World Wide Web, social networks, computational biology, information extraction, computer vision, and speech recognition. In the past few years SRL has received a lot of attention, and researchers have developed a rich variety of approaches with different relational or logical representations, probabilistic reasoning mechanisms, or machine learning principles. The goal of this paper is to provide an introduction to and an overview of this work. First, the research fields and the different tasks of SRL are introduced and summarized. An introductory survey of SRL approaches is then provided, in which the approaches are classified into four families according to their probabilistic representations and reasoning mechanisms: approaches based on Bayesian networks, stochastic grammars, Markov networks, and (hidden) Markov models. For each family, the probabilistic logical models, parameter estimation and structure learning, and the state of the art are introduced. Finally, the current problems in SRL are discussed and future research directions are pointed out.
Extending Conformant Fast-Forward to Deal with Disjunctive Goals
Yang Yupeng, Ouyang Dantong, Cai Dunbo, and Lü Shuai
2008, 45(12):  2120-2128. 
Conformant Fast-Forward, which handles certain goals, is extended to a new system that can deal with uncertain goals; the implemented planning system, called Conformant-FF-d, deals with disjunctive goals. Disjunctive goals are one expression of uncertain goals and are widely used to express uncertain goals in planning domains. The key components of the Conformant-FF-d system include goal state recognition, reachability analysis, and a heuristic function for disjunctive goals. A method that uses a SAT technique to recognize the goal states efficiently is proposed. For reachability analysis, an algorithm based on the relaxed planning graph and the implication graph is developed, which prunes belief states that can never reach the goals; this reduces the size of the search space, so a plan can be found in a relatively small search space rather than the whole space. A new heuristic function suitable for disjunctive goals is defined, which guides the search algorithm toward goal states effectively. Conformant-FF-d and the POND system are tested on domains used in the international planning competition. Experimental results show that Conformant-FF-d can deal with uncertain goals expressed in disjunctive form and outperforms the POND system in both efficiency and scalability.
A Survey of Computational Methods in Protein-Protein Interaction Research
Li Zhoujun, Chen Yiming, Liu Junwan, and Chen Huowang
2008, 45(12):  2129-2137. 
As molecular biology research enters the post-genome era focusing on proteomics, protein-protein interaction (PPI) has become an important topic of proteomics. Computational methods are widely used to analyze PPI data and to guide experimental design for biologists, because they cost less and have shorter experimental cycles. Computational methods are surveyed in two respects: constructing the PPI network and analyzing it. For constructing the PPI network, techniques based on machine learning are introduced, such as predicting PPI from protein sequences, expression data, or PPI networks; mining PPI from biological and medical literature databases; and evaluating PPI across various PPI data sets. For PPI network analysis, three important and typical network parameters (network diameter, degree distribution, and the clustering coefficient of a node) and four network models are illustrated in detail. Graph clustering and network motifs are two biologically significant kinds of PPI network modules, and the corresponding algorithms, which use graph-theoretic methods, are analyzed in depth. The concept of and methods for network alignment are also introduced. Finally, the computational research on protein-protein interaction is summarized and its prospects are discussed.
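Two of the surveyed network parameters are easy to state in code; the snippet below computes node degree and the clustering coefficient on a toy PPI graph stored as adjacency sets (the graph itself is invented for illustration).

# Degree and clustering coefficient on an undirected PPI graph.
def degree(graph, v):
    return len(graph[v])

def clustering_coefficient(graph, v):
    """Fraction of a node's neighbor pairs that are themselves connected."""
    nbrs = list(graph[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in graph[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

ppi = {"A": {"B", "C", "D"}, "B": {"A", "C"}, "C": {"A", "B"}, "D": {"A"}}
print(degree(ppi, "A"), clustering_coefficient(ppi, "A"))  # 3, 0.333...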
Extraction of Discriminant Features Based on Optimal Transformation and Cluster Centers of Kernel Space
Zhang Hongyi, Zhang Junying, and Zhao Feng
2008, 45(12):  2138-2144. 
As is well known, the kernel method is one of the most important research topics in statistical learning theory, and kernel-based learning methods are attracting extensive research interest. It has been proved that many linear feature extraction methods can be generalized to nonlinear learning methods by using kernel methods. In this paper, a new nonlinear version of the learning method of optimal transformation and cluster centers (OT-CC) is presented by using the kernel technique. Data are mapped into a high-dimensional kernel space via a nonlinear transformation from the original space, and the learning method of optimal transformation and cluster centers is then applied for feature extraction. The kernel function is utilized in the solution process, so an explicit expression for the nonlinear transformation is avoided. The novel method, named the optimal transformation and cluster center algorithm of kernel space (KOT-CC), is a powerful technique for extracting nonlinear discriminant features and is very effective in solving pattern recognition problems where the overlap between patterns is serious. A large number of experiments demonstrate that the new algorithm outperforms OT-CC and kernel Fisher discriminant analysis (KFDA) in both its ability to extract nonlinear discriminant features and its computational complexity. Furthermore, the problem of the “curse of dimensionality” is avoided.
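The kernel trick that makes such methods work can be shown in a few lines: the distance from a sample to a cluster's center in the implicit feature space is computed from kernel evaluations alone, without ever forming the nonlinear map. The RBF kernel and the tiny data set below are illustrative; the OT-CC transformation itself is defined in the paper.

# Distance to a cluster centroid in kernel feature space, via the expansion
# ||phi(x) - mean(phi(C))||^2 = k(x,x) - (2/n) sum_c k(x,c) + (1/n^2) sum k(c,c').
import math

def rbf(x, y, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def dist_to_kernel_centroid(x, cluster, gamma=1.0):
    n = len(cluster)
    term1 = rbf(x, x, gamma)
    term2 = 2.0 / n * sum(rbf(x, c, gamma) for c in cluster)
    term3 = sum(rbf(c1, c2, gamma) for c1 in cluster for c2 in cluster) / n ** 2
    return term1 - term2 + term3

cluster = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)]
print(dist_to_kernel_centroid((0.05, 0.05), cluster))   # small: x lies inside
print(dist_to_kernel_centroid((2.0, 2.0), cluster))     # large: x lies outside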
Motion Retrieval Based on Large-Scale 3D Human Motion Database by Double-Reference Index
Xiang Jian, Guo Tongqiang, Wu Fei, Zhuang Yueting, and Ye Lü
2008, 45(12):  2145-2153. 
In this paper, a novel approach is presented for motion retrieval based on a double-reference index, which reduces the number of costly distance computations in the similarity measure. To retrieve motion data accurately, features covering joint positions, angles, and velocities are extracted to represent the motion data. Since these original features of motion clips lie in a high-dimensional space and on a highly contorted high-dimensional manifold, Isomap nonlinear dimension reduction is used to map them onto a low-dimensional manifold. However, the geodesic distance of Isomap is only defined on training sets, and raw Isomap cannot map new samples to the embedding subspace because it requires the whole set of points in the database to calculate geodesic distances. To handle out-of-sample motion data, Isomap is generalized based on the estimation of the underlying eigenfunctions. The double-reference index (DRI) method is then proposed, based on selecting a small set of representative motion clips in the database; it obtains a candidate set by abandoning most unrelated motion clips, significantly reducing the number of costly similarity measurements. Finally, the automatic method is tested on a large collection of motion capture clips with a wide variety of actions. Experimental results show that the proposed methods are effective for motion data retrieval in a large-scale database.
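The pruning intuition behind a reference-based index can be sketched as follows: precomputed distances to a few reference clips, combined with the triangle inequality, lower-bound the true distance and discard most clips without a full comparison. This one-level sketch simplifies DRI's actual double-reference design; the distance function and data are stand-ins.

# Triangle-inequality pruning with reference objects (a one-level sketch).
def build_index(clips, refs, dist):
    return {i: [dist(c, r) for r in refs] for i, c in enumerate(clips)}

def candidates(query, refs, index, dist, radius):
    """Keep clip i only if no reference proves dist(query, clip_i) > radius,
    using |d(q, r) - d(clip_i, r)| <= d(q, clip_i) for every reference r."""
    q_to_refs = [dist(query, r) for r in refs]
    keep = []
    for i, ref_dists in index.items():
        lower_bound = max(abs(qr - cr) for qr, cr in zip(q_to_refs, ref_dists))
        if lower_bound <= radius:
            keep.append(i)               # survives pruning; verify with the real measure
    return keep

dist = lambda a, b: abs(a - b)           # stand-in for the motion similarity measure
clips = [1.0, 2.0, 8.0, 9.0]
refs = [0.0, 10.0]
index = build_index(clips, refs, dist)
print(candidates(1.5, refs, index, dist, radius=1.0))   # [0, 1]: distant clips pruned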
A Decision Algorithm for Judging the Overlap of Nodes in the R*-Tree Based on Clustering Analysis
Li Bohan and Hao Zhongxiao
2008, 45(12):  2154-2161. 
Clustering analysis can partition a large number of spatial objects into clusters and optimize the nodes of the R*-tree. The R*-tree is effective and efficient for different types of queries and operations on both rectangles and multidimensional points. Since the query efficiency of the R*-tree is mainly affected by overlap and coverage, an algorithm is presented that, following the forced-reinsertion principle of the R*-tree and based on clustering analysis, judges the overlap among clustered nodes by testing the intersection of diagonal line segment pairs. The novel algorithm overcomes the limitation of most existing index research, which handles overlap among nodes only by changing the bounding form or shrinking the orthogonal bounding rectangle. With the decision algorithm, the number of iterations in the clustering algorithm can be controlled, and the influence of noise points on the clustering can be decreased. Theoretical analysis shows that the time complexity of the algorithm is O(n log n), where n is the number of diagonal line segment pairs. Experimental results indicate that, for spatial range queries, the query efficiency of the R*-tree using the decision algorithm outperforms both that of the R*-tree with the gain/loss-metric greedy algorithm and that of the original SR-tree based algorithm.
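The geometric core of the decision algorithm, testing whether two diagonal line segments intersect, is standard computational geometry; a minimal orientation-based version (ignoring degenerate collinear touches) is sketched below.

# Orientation-based segment intersection test; collinear touching cases
# are simplified away in this sketch.
def orient(p, q, r):
    """Sign of the cross product (q-p) x (r-p): >0 left turn, <0 right turn."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_intersect(a, b, c, d):
    """True if segment ab properly intersects segment cd."""
    return (orient(a, b, c) * orient(a, b, d) < 0 and
            orient(c, d, a) * orient(c, d, b) < 0)

# Diagonals of two bounding rectangles: if they cross, the nodes overlap here.
print(segments_intersect((0, 0), (4, 4), (0, 4), (4, 0)))  # True: crossing
print(segments_intersect((0, 0), (1, 1), (2, 2), (3, 3)))  # False: disjoint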
Fast Algorithms for Synthesis of Quantum Reversible Logic Circuits Based on Hash Table
Li Zhiqiang, Chen Hanwu, Xu Baowen, and Liu Wenjie
2008, 45(12):  2162-2171. 
Quantum reversible logic circuits are basic elements of the quantum computer, which can be constructed by cascading and combining quantum gates. Synthesis of quantum reversible logic circuits automatically constructs the desired circuit at minimal quantum cost. Absorbing various ideas from reversible circuit synthesis, a novel and efficient algorithm is presented that constructs optimal quantum reversible logic circuits for various types of gates and quantum costs by building a minimal perfect hash function. A universal algorithm is also proposed that automatically constructs all kinds of quantum gate libraries through permutations of quantum lines, ensuring the automation of quantum circuit synthesis. Evaluated on the internationally recognized reversible functions of three variables, the presented algorithm not only synthesizes all optimal reversible logic circuits but also runs far faster than previous ones: its average speed when synthesizing circuits of minimum length or minimum cost is 49.15 and 365.13 times that of the best previous result, respectively.