ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 June 2007, Volume 44 Issue 6
Paper
A Fault-Tolerant Asymmetric DHT Method Towards Dynamic and Heterogeneous Network
Zhang Sanfeng and Wu Guoxin
2007, 44(6):  905-913. 
Designing DHT (distributed hash table) methods with an optimized degree-diameter tradeoff that better fit the dynamic, heterogeneous Internet is a focus of structured P2P research. A novel DHT method called A-DHT is presented. Nodes of an A-DHT network have asymmetric degrees, meaning that powerful nodes have larger degrees owing to their higher access bandwidth, lower access delay and more stable user behavior. A-DHT builds its asymmetric topology on the hyper-de Bruijn graph. Making use of the power of fat nodes, A-DHT achieves a better average degree-diameter tradeoff with controllable congestion. By using the edges of lean nodes, A-DHT also achieves better fault tolerance than alphabet-based DHT methods. The static topology and routing algorithm of A-DHT and a P2P network building method based on A-DHT are described. Theoretical analysis and simulation results indicate that A-DHT can reduce path length and latency at low network load, avoid overloading fat nodes at high network load, and achieve better fault tolerance in both situations.
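To illustrate the degree-diameter tradeoff that de Bruijn-style topologies offer, here is a minimal sketch of classic binary de Bruijn shift routing (background only; A-DHT's asymmetric hyper-de Bruijn construction and its fat/lean node handling are not reproduced):

```python
def debruijn_route(src: str, dst: str) -> list[str]:
    """Shift routing in the binary de Bruijn graph B(2, k): every node keeps
    just two out-neighbours (its label shifted left with a 0 or a 1 appended),
    yet any destination is reached in at most k hops.  This simple version
    always shifts in all k destination bits; smarter routing stops early."""
    assert len(src) == len(dst)
    path, cur = [src], src
    for bit in dst:
        cur = cur[1:] + bit          # drop the oldest bit, append the next bit of dst
        path.append(cur)
    return path

# 4-bit identifiers: degree 2, at most 4 hops.
print(debruijn_route("0110", "1011"))   # ['0110', '1101', '1010', '0101', '1011']
```

Each node keeps only two out-neighbours yet reaches any of the 2^k identifiers within k hops, which is the baseline tradeoff that A-DHT improves on by giving fat nodes larger degrees.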
A Protocol of Fault Diagnosis Agreement Based on Invalid Link
Dong Jian, Zuo Decheng, Liu Hongwei, Yang Xiaozong, and Ren Xiao
2007, 44(6):  914-923. 
Fault diagnosis agreement (FDA) can maintain the performance and integrity of highly reliable distributed systems. However, most previous FDA protocols only consider simple networks with a single faulty component, whereas for real distributed applications it is more important to study complicated networks with both faulty nodes and faulty links. Unfortunately, the diagnosis of malicious (Byzantine) faulty components cannot satisfy FDA in this situation because of the arbitrariness of their behavior. Thus, the model of the invalid link is first proposed in this paper, which more accurately describes the effect of a malicious faulty component in a network with dual faulty components and improves fault diagnosis coverage. Then, based on the invalid link model, an evidence-based fault diagnosis protocol, PLFDA, is presented. PLFDA collects the messages accumulated in a Byzantine agreement protocol as evidence and then diagnoses the set of faulty components by examining the collected evidence. It can not only simultaneously detect and locate both faulty nodes and faulty links, but also satisfy the requirements of FDA in a synchronous fully connected network, where the number of allowable faulty components is not greater than n/2-1, of which the number of allowable faulty nodes is less than or equal to (n-1)/3. In addition, the proof of correctness and the complexity of PLFDA, together with experimental results, are given.
An Optimized Flooding Algorithm Based on Multipoint Relaying for Mobile Ad Hoc Networks
Shi Wei, Li Shanping, and Yang Zhaohui
2007, 44(6):  924-931. 
Multipoint relaying (MPR) has proved to be an efficient strategy for on-the-fly flooding in mobile ad hoc networks. Upon receiving a flooding packet, each relaying node designated by the packet selects nodes from its neighbors for the next packet relay, and all the selected relaying nodes finally form a connected dominating set (CDS) for the network. Selecting the fewest neighbors as relaying nodes to cover all nodes within the 2-hop neighborhood is the key issue for MPR. However, existing MPR-based flooding algorithms neglect the impact of the common adjacency relations of relaying nodes. An analysis of the connecting topology of relaying nodes indicates that the cardinality of uncovered 2-hop neighbors can be further reduced, which means fewer redundant relaying nodes: nodes adjacent to several relaying nodes only need to be covered by one of them, so the workload of the other relaying nodes is reduced. Meanwhile, self-pruning is incorporated to improve MPR's performance; that is, selected nodes can decide by themselves whether or not to relay a packet. Based on these observations, an optimized flooding algorithm based on eliminating common adjacency relations with self-pruning (ECARSP) is proposed. Both theoretical analysis and experimental results show that ECARSP outperforms existing flooding algorithms in terms of the number of relaying nodes and the network workload.
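The core MPR step, covering every strict 2-hop neighbour with as few 1-hop relays as possible, is commonly done greedily; the sketch below shows only that baseline step (ECARSP's common-adjacency elimination and self-pruning are the paper's contributions and are not reproduced):

```python
def select_mpr(one_hop: dict[str, set[str]], self_id: str) -> set[str]:
    """Greedy MPR selection: one_hop maps each 1-hop neighbour to the set of
    nodes it reaches; cover every strict 2-hop neighbour with as few relays
    as possible (classic greedy set-cover heuristic)."""
    two_hop = set().union(*one_hop.values()) - set(one_hop) - {self_id}
    relays, uncovered = set(), set(two_hop)
    while uncovered:
        # pick the neighbour covering the most still-uncovered 2-hop nodes
        best = max(one_hop, key=lambda n: len(one_hop[n] & uncovered))
        if not one_hop[best] & uncovered:
            break                      # remaining nodes are unreachable in 2 hops
        relays.add(best)
        uncovered -= one_hop[best]
    return relays

neighbours = {"a": {"x", "y"}, "b": {"y", "z"}, "c": {"z"}}
print(select_mpr(neighbours, "s"))     # {'a', 'b'} covers x, y and z
```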
Research on a Mobility Model Based on Circle Movement in Ad Hoc Network
Wang Wei, Cai Wandong, Wang Beizhan, Li Yongjun, and Tian Guangli
2007, 44(6):  932-938. 
Ad hoc networks have not yet been deployed on a large scale, and research in this area is mostly simulation based on mobility models. Research on the node spatial probability distribution of a mobility model plays a very important role in studying many related characteristics of ad hoc networks. However, existing mobility models have many drawbacks, e.g., unrealistic movement scenarios, non-uniform distribution, and low connectivity. Based on an analysis of these models, a novel mobility model based on circle movement is proposed, and its spatial probability distribution is investigated in depth. By analyzing the probability and average length of the events that contribute to the node spatial distribution, the exact equation of the asymptotically stationary node distribution of the movement is derived on a 2D region. Its advantages, such as uniform distribution, are validated through simulations with various settings of the mobility parameters. Simulation results show that the new model avoids the above drawbacks. Given a deep understanding of the behavior of this new model, the research results are of practical value and provide an exact theoretical model for the simulation and evaluation of ad hoc network performance.
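As a purely illustrative toy (the single-orbit behaviour and parameter ranges are assumptions, not the paper's exact model), circle-based movement can be sampled like this:

```python
import math, random

def circle_mobility(steps: int, region: float = 1000.0, speed: float = 10.0):
    """Toy circle-movement generator (illustration only, with assumed
    parameter ranges): the node orbits a randomly chosen centre at a
    constant speed and yields its successive (x, y) positions."""
    r = random.uniform(50.0, region / 4)        # orbit radius
    cx = random.uniform(r, region - r)          # centre chosen so the orbit stays inside
    cy = random.uniform(r, region - r)
    theta = random.uniform(0.0, 2 * math.pi)
    for _ in range(steps):
        yield cx + r * math.cos(theta), cy + r * math.sin(theta)
        theta += speed / r                      # advance by one step of arc length `speed`

trace = list(circle_mobility(100))
```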
Topology Discovery with Smallest-Redundancy in IPv6
Yang Liu, Li Zhenyu, Zhang Dafang, and Xie Gaogang
2007, 44(6):  939-946. 
The importance of network management becomes more and more apparent with the rapid evolution of network technology, and a correct network topology is the basis of network management. IPv6 has been recognized as the next-generation Internet protocol, but its large address space and special features bring new challenges for topology discovery. Current ICMP-based topology discovery can be divided into distributed and centralized methods, both of which inevitably produce probing redundancy because they probe actively. Distributed topology discovery is difficult to deploy and its cost is very high; due to inter-redundancy it is also limited in redundancy reduction and thus cannot discover topology in a network-friendly manner. Since IPv6 routers support source routing, a centralized topology discovery method can cover cross links. In this paper, the intra-redundancy generated by probing from a single monitor is measured, and a centralized topology discovery method with smallest redundancy is then proposed for the IPv6 environment. Based on the backward algorithm, an improved algorithm is presented to reduce redundancy in practical network environments. Meanwhile, the centralized topology discovery method based on source routing in IPv6 is shown to be feasible. The experimental results show that this method can reduce the redundancy of nodes near the monitor by two orders of magnitude while obtaining satisfactory coverage.
A Generic Access Control Administration Model
Li Xiaofeng, Feng Dengguo, and Xu Zhen
2007, 44(6):  947-957. 
Current access control administration models designed to manage a given access control model are not suitable for enterprise environments in which different access control models coexist. An administration model is needed for efficiently administering different access control models in an enterprise environment. The main reason why an administration model cannot be used to manage other access control models is that the administration scopes defined in the model include characteristic components of the given access control model. This paper uses subjects and permissions, which are common to different access control models, to describe administration scopes, abstracts the interface between the administration model and the access control model into policy* functions, and proposes a generic administration model. The model introduces the concept of management space, which corresponds to a real enterprise structure and makes the model easily understood by managers, and administration tasks are carried out hierarchically. For autonomy, the model differentiates the direct manager's administration privileges from the indirect manager's administration privileges over one management space. The administration rules and semantics of the model are also discussed. The soundness of the model is proved, and the policy* algorithms of RBAC, MAC and HRU are analyzed. This model can be used to administer different access control models in an enterprise environment. An example is given that explains how to use the model to manage RBAC, MAC and HRU.
Modeling and Analysis of Active-Benign Worms and Hybrid-Benign Worms
Zhou Hanxun and Zhao Hong
2007, 44(6):  958-964. 
Since the Morris worm appeared in 1988, worms have persistently threatened the network; traditional anti-virus technologies no longer scale to deal with the worm threat, and benign worms have become a new active countermeasure. The idea of a benign worm is to transform a malicious worm into an anti-worm that spreads itself using the same mechanism as the original worm and immunizes a host. This approach allows an active measure against malicious worms that can potentially be deployed with no additional infrastructure in place. First, active-benign worms and hybrid-benign worms are each classified into three sub-types. Then, the three sub-types of active-benign worm and of hybrid-benign worm are modeled based on the two-factor model, and models of the six types of benign worms are derived both without and with delay time. Finally, simulation validates the models. Furthermore, the effect of each type on containing the spread of worms is discussed based on the results, leading to the conclusion that the composition-hybrid-benign worm is the most effective approach for containing the propagation of worms under the same infection conditions.
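As a heavily simplified illustration of how a benign worm competes with a malicious one (a discrete-time epidemic-style sketch with assumed rates, not the paper's two-factor model), consider:

```python
def benign_vs_malicious(N=100_000, i0=10, b0=100, beta=0.8, gamma=1.0, steps=300):
    """Toy discrete-time epidemic sketch (assumed rates; not the paper's
    two-factor model): a malicious worm (I) and an active-benign worm (B)
    both random-scan; the benign worm immunizes susceptible hosts (S) and
    patches already-infected ones."""
    S, I, B = float(N - i0 - b0), float(i0), float(b0)
    history = []
    for _ in range(steps):
        new_inf   = beta  * I * S / N        # S -> I  (malicious infections)
        new_imm_s = gamma * B * S / N        # S -> B  (benign worm immunizes)
        new_imm_i = gamma * B * I / N        # I -> B  (benign worm patches)
        S = max(S - new_inf - new_imm_s, 0.0)
        I = max(I + new_inf - new_imm_i, 0.0)
        B = min(B + new_imm_s + new_imm_i, float(N))
        history.append((S, I, B))
    return history

peak_infected = max(i for _, i, _ in benign_vs_malicious())
```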
Hybrid Sensor Networks Deployment Based on Virtual Force
Zhou Tong, Hong Bingrong, and Piao Songhao
2007, 44(6):  965-972. 
Most existing research on sensor networks considers networks in which all sensors are either static or mobile nodes. To ensure good performance, sensor networks should have self-deploying and self-healing capabilities to handle coverage holes caused by random placement and sensor failures. However, a mobile sensor costs much more than a static sensor with similar sensing capability, and deploying only mobile sensors makes the network too expensive. To improve coverage while keeping the sensor cost low, it is proposed to intentionally add mobile sensors to a number of static sensors in the network. Mobile sensors can improve network performance by moving to locations where there is a coverage hole, and thus essentially provide self-healing and self-optimizing capabilities. A hybrid sensor network is composed of static nodes and mobile nodes. A novel mobile node deployment method based on virtual forces among nodes is presented in order to deploy the mobile nodes so as to maximize coverage of the sensing area. The forces resulting from virtual potential fields between the nodes are used to control the movement of the mobile nodes, making them move to appropriate positions with little energy consumption within the allowed time. The feasibility of the algorithm is analyzed in theory, its validity is verified by numerical simulation, and its performance is compared with that of three other similar algorithms.
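A generic virtual-force iteration can be sketched as below (the gain constants and distance threshold are assumptions for illustration; the paper's exact force model is not reproduced):

```python
import math

def virtual_force_step(nodes, d_th=80.0, k_att=0.5, k_rep=2000.0, max_move=5.0):
    """One iteration of virtual-force deployment (generic sketch, assumed
    constants): nodes closer than the threshold d_th repel each other,
    farther nodes attract, and each mobile node moves a bounded distance
    along its resultant force."""
    new_positions = []
    for i, (xi, yi) in enumerate(nodes):
        fx = fy = 0.0
        for j, (xj, yj) in enumerate(nodes):
            if i == j:
                continue
            dx, dy = xi - xj, yi - yj
            d = math.hypot(dx, dy) or 1e-9
            if d < d_th:
                mag = k_rep / (d * d)            # repulsion pushes crowded nodes apart
            else:
                mag = -k_att * (d - d_th)        # attraction pulls isolated nodes closer
            fx += mag * dx / d
            fy += mag * dy / d
        f = math.hypot(fx, fy)
        step = min(max_move, f)                  # bound per-iteration movement
        if f > 0:
            new_positions.append((xi + step * fx / f, yi + step * fy / f))
        else:
            new_positions.append((xi, yi))
    return new_positions
```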
Asymptotical Stability Analysis for Recurrent Neural Networks with Time-Varying Delays
Zhang Zhong and Li Chuandong
2007, 44(6):  973-979. 
When a neural network is applied to optimal computation, the ideal situation is that there is a unique equilibrium point which is globally asymptotically stable and toward which the network converges. The problem of the global asymptotic stability of recurrent neural networks with time-varying delay is investigated. By transforming the delayed neural model into a descriptor model and then employing the Lyapunov-Krasovskii stability theorem, the linear matrix inequality (LMI) technique, the S-procedure, and some algebraic inequality methods, a new sufficient condition is derived, which is determined by the coefficients of the model and includes more tuning parameters for determining the global asymptotic stability of recurrent neural networks with time-varying delay. The condition is easily verified numerically by interior-point algorithms for convex quadratic programming because it can be recast as a set of linear matrix inequalities. The proposed result is further applied to two special cases: the cellular neural network model with time delay and recurrent neural networks with constant delays. Theoretical analysis and computer simulations show that the presented results provide several new sufficient conditions for the asymptotic stability of the investigated delayed neural network model.
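For background, a standard delayed recurrent-network model and a generic Lyapunov-Krasovskii functional of the kind such analyses start from are shown below (the paper's actual functional, descriptor transformation and LMI condition are more elaborate):

```latex
% Background only: a standard delayed recurrent-network model and a generic
% Lyapunov-Krasovskii functional; the paper's functional, descriptor
% transformation and resulting LMI condition are more elaborate.
\[
\dot{x}(t) = -C\,x(t) + A\,f\!\bigl(x(t)\bigr) + B\,f\!\bigl(x(t-\tau(t))\bigr),
\qquad 0 \le \tau(t) \le \bar{\tau},\quad \dot{\tau}(t) \le \mu < 1 .
\]
\[
V(x_t) = x^{\mathsf{T}}(t)\,P\,x(t)
       + \int_{t-\tau(t)}^{t} x^{\mathsf{T}}(s)\,Q\,x(s)\,\mathrm{d}s ,
\qquad P \succ 0,\ Q \succ 0 .
\]
% Asymptotic stability follows when \dot{V}(x_t) < 0 along all trajectories,
% a requirement that can be recast as linear matrix inequalities in P and Q
% and verified by interior-point solvers.
```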
An Intelligence-Based Model and Methodology of Multiple Agent System
Li Chaoming and Su Kaile
2007, 44(6):  980-989. 
To obtain better architecture patterns for pattern-driven approaches in agent-oriented software engineering (AOSE) and in the design of multiple agent systems (MAS), an intelligence-based (IB) idea is presented that integrates the advantages of PO, OO and AO through intelligent information. Meanwhile, a new methodology named IB is presented, which covers all the processes from affair analysis to agent organization and system implementation, putting the IB idea into practice. The IB methodology can be described by formal methods, which ensures that it can be implemented as a programming environment on a computer; on the other hand, the integrality of the IB methodology, shown by its coverage of all the processes, ensures that it can be used in industry. These two elements ensure that the software architecture pattern of a MAS can be derived with the IB methodology. Furthermore, a model of intelligence-based intelligent systems (MIBIS) is presented, and IB is described by the construction of this model. Finally, a machine-human cooperation system is implemented as a practice of the above; in that system, people can use the computer as their assistant in a match. It is shown that the system model has good practicality, dynamic organizability, reusability, expansibility, real-time performance and generality.
An Algebra for Skyline Query Processing Data Cube
Huang Zhenhua and Wang Wei
2007, 44(6):  990-999. 
Skyline query processing has recently received a lot of attention in the database community, mainly because of the importance of skyline results in many applications such as multi-criteria decision making, data mining and visualization, and user-preference queries. At present, all existing methods obtain the skyline set by executing query algebra operations directly on the original tables. However, these methods become inapplicable when the cardinality of the original tables and the number of dimensions grow large. Motivated by these facts, query algebra operations on skyline sets are studied for the first time. These algebra operations only require the input of skyline query processing to be the skyline sets, whose sizes are much smaller than the original tables. A formalized model is also proposed for the first time that brings together the set of multi-dimensional objects and the result set of the skyline query, and instances of this formalized model can be used to study the query algebra operations on skyline sets. Moreover, a cost model of the data model and query algebra operations is proposed. Extensive experiments demonstrate that the data model and query algebra operations are both efficient and effective.
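For reference, the skyline (the set of points dominated by no other point) can be computed naively as follows; this baseline sketch is independent of the skyline algebra proposed in the paper:

```python
def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly
    better in at least one (here: smaller is better)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Naive O(n^2) skyline: keep the points dominated by no other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

hotels = [(50, 2.0), (80, 0.5), (60, 1.0), (90, 3.0)]   # (price, distance)
print(skyline(hotels))   # (90, 3.0) drops out: it is dominated by (50, 2.0)
```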
A Window Join Optimization Algorithm Based on Minimum Spanning Tree
Qian Jiangbo, Xu Hongbing, Dong Yisheng, Wang Yongli, Liu Xuejun, and Yang Xuemei
2007, 44(6):  1000-1007. 
Unlike traditional database systems, a data stream management system (DSMS) mainly processes concurrent continuous queries. Because queries may be added or deleted at any moment, the focus of query optimization for a DSMS is to find an algorithm that adapts to frequent addition and deletion of queries. Furthermore, since window joins are among the most expensive operations of continuous queries, window join optimization notably improves DSMS performance. Therefore, a window join optimization algorithm is proposed. For each new query, the probing sequences of each data stream are built in a manner similar to a minimum spanning tree algorithm. To execute concurrent window joins, the probing sequences starting from the same stream are built into one tree whose root node is that stream; probing sequences with the same prefix can be grouped into one branch of the tree to reduce duplicate computation. Because the algorithm does not perform cost comparisons for grouping common join predicates, adding or deleting a query does not influence other query plans, and complicated dynamic plan migration is not needed. Theoretical analysis and experimental results show that the algorithm is simple and its optimization performance is acceptable.
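The prefix-sharing idea can be pictured as grouping probing sequences into per-stream trees; a toy nested-dictionary sketch (an assumed representation, not the paper's data structure) is:

```python
def build_probe_tree(sequences):
    """Group probing sequences that start from the same stream into one tree;
    sequences sharing a prefix share a branch, so common join work is done
    once (illustrative nested-dict trie)."""
    trees = {}
    for seq in sequences:                 # e.g. ("S1", "S2", "S3")
        root, *rest = seq
        node = trees.setdefault(root, {})
        for stream in rest:
            node = node.setdefault(stream, {})
    return trees

plans = [("S1", "S2", "S3"), ("S1", "S2", "S4"), ("S2", "S3")]
print(build_probe_tree(plans))
# {'S1': {'S2': {'S3': {}, 'S4': {}}}, 'S2': {'S3': {}}}
```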
Indexing the Past, Present and Future Positions of Moving Objects in Urban Traffic Networks
Chen Jidong, Hu Zhizhi, Meng Xiaofeng, and Wang Ling
2007, 44(6):  1008-1014. 
Advances in wireless sensor networks and positioning technologies enable new data management applications that monitor continuous streaming data. In these applications, efficient data management is a challenging goal due to the highly dynamic nature of the data and the need for fast, online computation. An efficient indexing structure for moving objects is necessary to support query processing over such dynamic data, but existing work cannot index the past, current and future positions of moving objects at the same time. In this paper, a novel index technique is proposed to support querying the past, present and future positions of moving objects in urban traffic networks. First, a simulation-based location prediction model for future vehicle trajectories is presented, which is more accurate than the traditional linear prediction model of the TPR-tree. Moreover, exploiting the features of traffic networks, a dynamic structure termed the AU (adaptive unit) is presented and developed into an R-tree based index named the current-AU. Finally, by naturally extending the AU, the past-AU is proposed, which can index historical trajectories while avoiding the dead space that is inevitable in the TB-tree. Experimental studies indicate that the AU-index outperforms the traditional TPR-tree and TB-tree.
An Efficient Prediction Technique for Range Aggregation of Moving Objects
Liao Wei, Jing Ning, Zhong Zhinong, and Chen Hongsheng
2007, 44(6):  1015-1021. 
Predictive range aggregate (PRA) queries are an important research topic in moving object databases. In this paper an efficient prediction technique, the PRA-tree, is presented for range aggregation of moving objects. The PRA-tree splits the velocity domain regularly and classifies moving objects into different velocity buckets according to their velocities. Then an index based on the TPR-tree structure, augmented with aggregate information in intermediate nodes, is used to index the moving objects in each bucket, thus reducing the disk accesses of PRA queries. The PRA-tree is supplemented by a hash index on leaf nodes and uses a bottom-up delete algorithm, giving it good update performance and concurrency. Also developed for the PRA-tree is an enhanced predictive range aggregate (EPRA) query algorithm that uses a more precise branch-and-bound searching strategy, greatly reducing disk I/O. Experimental results and analysis show that the EPRA algorithm on the PRA-tree has good query performance and outperforms the popular TPR*-tree index.
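The regular velocity split can be illustrated by a small helper that maps a velocity to its bucket (the domain bounds and granularity are assumed parameters, not values from the paper):

```python
def velocity_bucket(vx: float, vy: float, v_max: float, buckets_per_axis: int):
    """Map an object's velocity to a cell of a regular grid over the velocity
    domain [-v_max, v_max]^2, in the spirit of the PRA-tree's regular split."""
    def idx(v: float) -> int:
        clamped = max(-v_max, min(v_max, v))
        cell = int((clamped + v_max) / (2 * v_max) * buckets_per_axis)
        return min(cell, buckets_per_axis - 1)   # keep v == v_max in the last cell
    return idx(vx), idx(vy)

print(velocity_bucket(3.0, -7.5, v_max=10.0, buckets_per_axis=4))   # (2, 0)
```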
A Gradient-Based Robust Method for Estimation of Fingerprint Orientation Field
Mei Yuan, Sun Huaijiang, and Xia Deshen
2007, 44(6):  1022-1031. 
The automatic fingerprint identification system (AFIS), one of the most important biometric authentication technologies, has been extensively studied and achieves good performance on small databases, but critical issues remain, such as long processing time on large databases and low matching rates on poor-quality images. Solving these problems requires improvements in fingerprint classification and identification. As a global feature of a fingerprint, the orientation field, which describes the local direction of the ridge-valley pattern, plays a very important role in both topics mentioned above. Many gradient-based methods for estimating fingerprint orientation have been proposed, but their results are not very satisfactory, especially for poor-quality images. In this paper, a gradient-based robust method for estimating fingerprint orientation fields is proposed. The method comprises three main steps: first, normalize the point-gradient vectors and calculate the block-gradient vectors and the corresponding block coherence; then detect the noisy areas to eliminate the side effect of noise diffusion; finally, re-estimate all block-gradient vectors iteratively and transform the gradient field into the orientation field. Experiments conducted on FVC 2000 and FVC 2004 show that, compared with previously proposed gradient-based methods, the proposed method is more accurate and more robust against noise, and is able to predict the orientation field within large noisy areas.
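The conventional gradient-based block orientation estimate that such methods build on (the noise detection and iterative re-estimation are this paper's contribution and are not reproduced) is the doubled-angle least-squares formula; a minimal numpy sketch:

```python
import numpy as np

def block_orientation(gx: np.ndarray, gy: np.ndarray):
    """Classic least-squares (doubled-angle) block orientation estimate:
    gx, gy are the pixel-wise x/y gradients of one block; the ridge
    direction is perpendicular to the dominant gradient direction."""
    gxx = float(np.sum(gx * gx))
    gyy = float(np.sum(gy * gy))
    gxy = float(np.sum(gx * gy))
    theta = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy)          # dominant gradient angle
    coherence = np.hypot(gxx - gyy, 2.0 * gxy) / (gxx + gyy + 1e-12)
    return theta + np.pi / 2.0, coherence                    # ridge orientation, reliability
```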
Extensions of Uniform Cubic B-Spline Curve with Local Shape Parameters
Xu Gang and Wang Guozhao
2007, 44(6):  1032-1037. 
The construction of B-spline curves with shape parameters has become a hot topic in computer aided geometric design. However, the shape parameters of the curves in previous papers are all global parameters. In order to introduce B-spline curves with local shape parameters, two classes of polynomial blending functions with local shape parameters are presented in this paper. Both are extensions of the cubic B-spline basis functions and have properties similar to those of the classical cubic B-spline basis functions. Based on the given blending functions, a method of generating piecewise polynomial curves with local shape parameters is proposed. By changing the values of the local shape parameters, the shape of the curves can be manipulated locally. The cubic curves can be manipulated to approach the cubic B-spline curves from the side away from the control polygon by changing the value of the shape parameter; similarly, the quartic curves can be manipulated to approach the cubic B-spline curves from both sides by changing the values of the shape parameters. The geometric meanings of the local shape parameters are also discussed, and their applications in curve design and interpolation are presented. The modeling examples illustrate that these new curves are very valuable for computer aided geometric design.
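For reference, the classical uniform cubic B-spline basis that the proposed blending functions extend can be evaluated as below (the shape-parameter extensions themselves are not reproduced here):

```python
def cubic_bspline_point(p0, p1, p2, p3, t):
    """Evaluate one segment of a uniform cubic B-spline curve for t in [0, 1],
    using the classical basis functions (they are non-negative and sum to 1)."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0
    b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0
    b3 = t ** 3 / 6.0
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# One segment sampled at 11 parameter values over four control points.
segment = [cubic_bspline_point((0, 0), (1, 2), (3, 2), (4, 0), t / 10) for t in range(11)]
```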
A Progressive Geometry Simplification Method for Mobile Computing Terminal
Luo Xiaonan, Lin Mouguang, Ji Changbo, and Li Zhiyong
2007, 44(6):  1038-1043. 
This paper focuses on a progressive geometry simplification method for mobile computing terminals. Mobile 3D graphics on mobile terminals is a significant ongoing research topic. Given the limitations of mobile terminals, such as small display panels, low processor performance and narrow wireless bandwidth, investigating the progressive display of mobile 3D graphics is highly necessary. An iterative simplification method based on the reverse Kobbelt subdivision scheme is proposed as a solution for displaying 3D graphics on mobile computing terminals. The proposed method progressively simplifies quadrilateral mesh models by iteratively dividing a mesh into odd vertices and even vertices and saving the even vertices as a sparse model. Furthermore, by iteratively using the odd vertices as details and adding them back to the sparse model, a progressive display pattern is introduced that supports progressive display of LOD (level of detail) models and lossless reconstruction of the original mesh models. The new simplification method is compact and fast, and it can implement progressive display of mobile 3D graphics efficiently. The experimental results on a PDA (Mio 336) show that this research has wide prospects in the mobile 3D graphics field, such as real-time interaction on mobile computing terminals.
User-Driven Requirements Elicitation Method with the Support of Personalized Domain Knowledge
Shu Fengdi, Zhao Yuzhu, Wang Jizhe, and Li Mingshu
2007, 44(6):  1044-1052. 
The requirements elicitation of application software is tightly bound to the application domain and faces many non-technological issues. It is increasingly difficult for requirements engineers to explore and understand the problem space and to learn what users really need and expect solely by traditional methods and techniques, especially for application systems that involve rich domain-specific knowledge and have multiple users playing multiple roles within a complex organizational structure. User involvement is emphasized, yet users often have difficulty delivering their requirements clearly and completely owing to their vague and partial understanding of the system. Although the reuse of domain knowledge is helpful, its effectiveness is limited by ignoring each user's individuality and specific local concerns about the system, as well as by the vastness of domain knowledge; in addition, user cooperation has not been considered sufficiently. A novel user-driven requirements elicitation method is therefore proposed. Based on users' individualities and context, support is provided for users to present requirements, including the recommendation of domain assets and advice on multi-user cooperation. The application of the method shows that it is feasible and has positive effects on requirements elicitation by promoting user participation and improving domain knowledge reuse.
DNA Computing Model Based on a New Scheme of Encoding Weight for Chinese Postman Problem
Han Aili and Zhu Daming
2007, 44(6):  1053-1062. 
Weight encoding is an important but challenging issue in DNA computing. A new DNA encoding scheme for representing the weights of a weighted graph is devised in this paper, and a corresponding DNA algorithm for the Chinese postman problem is proposed using this encoding scheme. The random procedure of generating various paths in the DNA algorithm is analyzed by means of a Markov chain. Any weighted graph G=(V,E) is first converted into its general edge graph G′=(V′,E′) through a mapping from edges to vertices: each edge e_i in G is mapped to a vertex v′_i in G′, and the vertices v′_i and v′_j in G′ are linked if e_i and e_j are adjacent in G. If v_i in G is a vertex of odd degree, self-loops are added to the vertices in G′ that are mapped from the edges incident to v_i. The length of the DNA strand s_i used to encode vertex v′_i equals the weight of edge e_i, and the DNA strand s_{ij} used to encode edge v′_iv′_j is the reverse complement of the second half of s_i and the first half of s_j. The proposed DNA encoding scheme is easy to encode, easy to generalize, and has a low error rate. This work improves the capability of representing and handling numerical data and extends the range of numerical optimization problems solvable by DNA computing.
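The graph transformation described above (edges of G become vertices of G′, adjacency is inherited, and self-loops mark edges incident to odd-degree vertices) can be sketched as follows; the DNA strand encoding itself is not reproduced:

```python
from itertools import combinations

def edge_graph(vertices, edges):
    """Build the edge graph G' described in the abstract: each edge of G
    becomes a vertex of G', two such vertices are linked when the original
    edges share an endpoint, and a self-loop is added for every edge incident
    to an odd-degree vertex of G.  `edges` maps edge name -> (u, v, weight)."""
    degree = {v: 0 for v in vertices}
    for u, v, _ in edges.values():
        degree[u] += 1
        degree[v] += 1
    adjacency = {e: set() for e in edges}
    for (e1, (u1, v1, _)), (e2, (u2, v2, _)) in combinations(edges.items(), 2):
        if {u1, v1} & {u2, v2}:              # adjacent edges in G -> linked in G'
            adjacency[e1].add(e2)
            adjacency[e2].add(e1)
    for e, (u, v, _) in edges.items():
        if degree[u] % 2 or degree[v] % 2:   # odd-degree endpoint -> self-loop
            adjacency[e].add(e)
    return adjacency

G_prime = edge_graph("abc", {"e1": ("a", "b", 3), "e2": ("b", "c", 5), "e3": ("a", "c", 2)})
```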
Improved Molecular Solutions for the Knapsack Problem on DNA-Based Supercomputing
Li Kenli, Yao Fengjuan, Li Renfa, and Xu Jin
2007, 44(6):  1063-1070. 
DNA-based supercomputing has solved hard computational problems such as NP-complete problems in polynomially increasing time by exploiting its super-parallelism and high-density storage. However, almost all current DNA computing strategies are based on exhaustive enumeration, which makes the size of the initial DNA strand library grow exponentially. Reducing this exponential growth in the number of DNA strands is therefore a key issue in research on DNA computers. To solve the knapsack problem, a famous NP-complete problem, with a DNA computer, the divide-and-conquer strategy is introduced into DNA-based supercomputing and a DNA algorithm is proposed. The proposed algorithm consists of an n-bit parallel subtracter, an n-bit parallel searcher, and four other sub-procedures. It is demonstrated that the proposed algorithm can reduce the number of DNA library strands from O(2^q) to O(2^{q/2}) for a q-dimensional knapsack instance, while keeping the operation time essentially unchanged. This shows that traditional techniques can still be important in designing DNA computer algorithms. Furthermore, this work indicates that knapsack-based public-key cryptosystems are perhaps insecure because, theoretically, a 120-variable knapsack public key could easily be broken once DNA computer technology matures.
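The classical silicon counterpart of this divide-and-conquer idea is meet-in-the-middle enumeration, which likewise replaces one search over 2^q subsets by two searches over 2^{q/2} subsets; a minimal subset-sum-style sketch (an analogy only, not the DNA algorithm):

```python
from bisect import bisect_right

def subset_sums(items):
    """All subset sums of one half of the items."""
    sums = {0}
    for w in items:
        sums |= {s + w for s in sums}
    return sorted(sums)

def best_knapsack_value(weights, capacity):
    """Meet-in-the-middle for a subset-sum style knapsack (value == weight):
    enumerate each half separately (O(2^(q/2)) each) and combine the halves,
    instead of enumerating all 2^q subsets at once."""
    half = len(weights) // 2
    left, right = subset_sums(weights[:half]), subset_sums(weights[half:])
    best = 0
    for s in left:
        if s > capacity:
            break
        # largest right-half sum that still fits next to s
        idx = bisect_right(right, capacity - s) - 1
        best = max(best, s + right[idx])
    return best

print(best_knapsack_value([8, 3, 5, 7, 11, 2], 20))   # -> 20 (e.g. 8 + 3 + 7 + 2)
```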
Granary: An Architecture of Object Oriented Internet Storage Service
Hu Jinfeng, Hong Chunhui, and Zheng Weimin
2007, 44(6):  1071-1079. 
Granary is a public storage service on the Internet with two distinguishing goals compared with previous projects. First, it is object-oriented and thereby supports attribute-level data query. Second, it is very flexible with respect to the system environment, i.e., it can be deployed in a grid-like environment, a peer-to-peer-like environment, or even a compromise between the two. Presented in this paper are Granary's architecture as well as some of its significant components designed in adherence to these two goals, including the node-collection protocol PeerWindow, the routing infrastructure Tourist, and the object-index management algorithm PB-link Tree. An implementation of Granary has been developed and will be deployed in China Grid.
Research of CRAM's Architecture and Scheduling Algorithm of LS CSIMD
Zhou Guochang and Shen Xubang
2007, 44(6):  1080-1087. 
Dynamically reconfigurable architectures are emerging as a viable design alternative for implementing a wide range of computationally intensive applications. The architecture of the configuration memory and the management of contexts (configurations) are critical to achieving high performance in dynamically reconfigurable architectures. Through in-depth study of the LS CSIMD architecture, this paper proposes an architecture of configuration memory (CRAM) with extensible accessing space, in which configuration instructions adopt an immediate addressing mode. This multi-context CRAM architecture can greatly reduce reconfiguration latency by increasing CRAM capacity. For the CRAM architecture, a novel scheduling algorithm based on the frequency and capacity of task kernels is then proposed (Section 3). According to the characteristics of the reconfigurable task sequence, the algorithm dynamically computes the capacity of the static region of the CRAM and then decides whether each task kernel is placed in the static or the dynamic region. In this way, the amount of configuration data repeatedly loaded into the CRAM is effectively decreased and the reconfiguration latency is reduced. The experimental results (Section 4) show that the time complexity of the scheduling algorithm is about O(n) in the best case and about O(n·log₂n) otherwise. When there are more task kernels and less difference among kernel capacities, the results are optimal.