ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 December 2007, Volume 44 Issue 12
Paper
Chaos Triangle Compliant Location Reference Node Selection Algorithm
Sun Peigang, Zhao Hai, Han Guangjie, Zhang Xiyuan, and Zhu Jian
2007, 44(12):  1987-1995. 
Abstract ( 511 )   HTML ( 1 )   PDF (472KB) ( 723 )
Positioning service is one of the basic services required by practical applications of ubiquitous computing, and obtaining the location of an unknown node precisely is a key problem of positioning in ubiquitous computing. However, positioning error is inevitable due to various potential errors caused by imprecise measuring instruments, improper measuring methods, etc. Firstly, a new error convergence theorem about how to reduce positioning error rapidly is presented and proved. The theorem is composed of three sub-theorems, which indicate respectively how to obtain the smallest initial positioning error, how to reduce the initial location error more quickly by topological replication of reference nodes, and how to converge the initial location error to obtain the minimal location error. Secondly, with a view to actual applications in ubiquitous computing, the optimal computing unit for reference node selection is proposed, and a location reference node selection algorithm using topological duplication according to chaos triangles is put forward based on the presented error convergence theorem. Performance analysis and simulation experiments indicate that, at the same location accuracy, the location reference node selection algorithm is better suited to the resource-constrained environments of ubiquitous computing, with less system cost and faster positioning error convergence than the traditional polygonal positioning algorithm.
A Complete Frequency Lossless Watermarking Method via Bandelet and Adaptive Matrix Norm
Yang Yuexiang, Luo Yong, Ye Zhaohui, and Cheng Lizhi
2007, 44(12):  1996-2003. 
Abstract ( 330 )   HTML ( 0 )   PDF (490KB) ( 550 )
Watermarking is a technology that can be used to authenticate image copyright and provide security. Traditional watermarking protects a digital image by modifying the image data to hide the information used to authenticate its copyright, so it is not suitable for protecting images that are not permitted to be altered. Lossless watermarking is an effective method to protect such image data. A complete frequency lossless watermarking method, with which the image data need not be changed, is proposed in this paper. A wavelet transform is first performed on the original image; then, for the high and middle frequency parts of the transformed image, the geometric flow is traced using the Bandelet transform, and the texture and edge feature parameters of the image are used to construct the high frequency lossless watermark. For the low frequency part of the wavelet coefficients, a new watermarking scheme based on selecting an optimal matrix norm is proposed. By drawing on the statistical character and the edges of the image, the method is able to protect the image from all-around attacks. Experimental tests show that the proposed approach can resist strong attacks, and it can be widely used for protecting data that are not permitted to be modified.
Optimal Path Based Geographic Routing in Ad Hoc Networks
Yu Kun, Wu Guoxin, Xu Libo, and Wu Peng
2007, 44(12):  2004-2011. 
Abstract ( 380 )   HTML ( 1 )   PDF (437KB) ( 461 )
Routing in ad hoc networks is still a difficult problem because of energy and bandwidth restrictions. Using location information, routing can be based only on local information, but the concave-node problem often emerges in geographic routing algorithms. By analyzing the concave-node problem in geographic routing in ad hoc networks, a new algorithm called PGA, as well as its improved versions, is proposed. In this method, the concept of the optimal path is used in all phases of routing, including path construction, routing based on the optimal path, and routing recovery. The mechanism is very effective in handling the concave-node problem, and the routing needs only some local path information. The path construction and routing algorithm can be proved to be loop-free, i.e., the algorithm guarantees that routing cannot produce loop paths, so the routing is stateless. All of these lead to the algorithm's scalability and maintainability. Experiments demonstrate that even in large networks the algorithm achieves a high packet delivery success rate, short path length, acceptable routing size and controllable protocol communication cost. Meanwhile, the routing algorithm is robust enough in dynamic environments, so it can be widely applied in ad hoc or sensor networks.
A Network Security Analysis Model Based on the Increase in Attack Ability
Zhang Haixia, Su Purui, and Feng Dengguo
2007, 44(12):  2012-2019. 
Abstract ( 442 )   HTML ( 1 )   PDF (505KB) ( 925 )
In recent years, network vulnerability analysis, which is attracting more and more domestic and foreign researchers, has become a hotspot in the field of information security. A new network security analysis model based on the increase in attack ability is proposed. It takes the network environment into account, simulates the attacker's behavior, and treats improving attack ability as the attacker's ultimate goal when generating the attack graph. The representation used for the attack graph makes the attack target clearer, because it describes a goal by the increment of attack ability, which is more accurate than the attack ability itself. The minimum attack cost analysis considers, for the first time, the influence of similar attacks when computing the cost of each path, which conforms to the actual process of attack execution. The minimum environment change analysis helps to find out which attack path is most likely to be adopted by the attacker, and deals with IDS in a more reasonable way. These two analysis methods are helpful for improving the network configuration. The attack graph generation algorithm and the analysis methods proposed in this network security analysis model are more feasible than the existing ones.
An Administrative Model for Role-Based Access Control Using Hierarchical Namespace
Xia Luning and Jing Jiwu
2007, 44(12):  2020-2027. 
Abstract ( 411 )   HTML ( 2 )   PDF (427KB) ( 582 )
Access control is an important information security mechanism. Role-based access control (RBAC) is a well-known access control approach with good flexibility and extensibility. The classical RBAC models are RBAC96 and ARBAC97. The ARBAC97 model is an administrative model built on the idea of “using RBAC to administrate RBAC”; it facilitates decentralized administration of RBAC through three assignment models: URA97, PRA97 and RRA97. Though ARBAC97 works well in traditional RBAC applications, it has some shortcomings when employed in a large organization composed of many autonomous subsidiaries. A member of an administrative role can operate directly in the role range of a junior administrative role, which violates the autonomy of subsidiaries; the authorization relationship is rather complex; and the names of the roles have to be globally unique. A new administrative model named N-RBAC is proposed to overcome these weaknesses. In N-RBAC, all resources (including users and roles) are arranged into a hierarchical namespace structure, so the role hierarchy is constructed in a local space instead of a global space. The administrative role hierarchy is abandoned, and a unique administrative role is assigned to each namespace instead. Experimental results show that the N-RBAC model is more suitable for autonomous distributed role administration than the ARBAC97 model.
The Correlation Determine Algorithm for Implied Restriction
Bao Xuhua, Dai Yingxia, Lian Yifeng, and Zhu Pengfei
2007, 44(12):  2028-2035. 
Abstract ( 284 )   HTML ( 2 )   PDF (541KB) ( 392 )
With the growth of networks in scale and bandwidth, security issues have become more and more complex and the requirement for correlation technology has increased rapidly. Causal correlation is one of the most popular correlation methods, and its basis is the method for judging the relation between two alerts. In this paper, a formal description of general causal correlation is given, which reveals some limitations of the conventional approaches. Then the difficulty of correlation under implied restrictions is analyzed, and several cases of such restrictions and their solutions are discussed. Sometimes an alert lasts for a duration of time, so distinguishing the order of two alerts becomes difficult; this is the time restriction problem. In the real world one host may have several interfaces, while an interface may have several addresses, and this kind of problem results in the location restriction. Access control has always played an important role in modern operating systems, and the complex relations among subject, object and privilege are the most difficult part of correlating two alerts; this involves the access control restriction. Finally, a new correlation determine algorithm for implied restriction (CDAIR) is proposed, which solves the problems of the time restriction, the location restriction and the access control restriction. The process and results of the corresponding experiments, which prove the validity of the algorithm, are also given.
Automatic Chinese Visual Human Image Segmentation in HSV Space
Chen Yunjie, Zhang Jianwei, Wei Zhihui, Heng Pheng Ann, and Xia Deshen
2007, 44(12):  2036-2043. 
Abstract ( 488 )   HTML ( 0 )   PDF (662KB) ( 575 )
The Chinese Visible Human Project research team at the Third Military Medical University has successfully collected several Chinese visible human (CVH) data sets since October 2002. With these images a medical learning system can be developed, but CVH image segmentation is one of the hardest tasks, especially for the brain images. The Chinese visible human brain images are analyzed in HSV color space, which can distinguish different areas of the brain clearly. A new fuzzy anisotropic diffusion function is presented, which can diffuse the images while preserving edge information. Because of the effect of fake grey matter, which belongs to grey matter located under the current image and has a similar color to true grey matter, it is hard to obtain exact grey matter. In order to extract exact grey matter from the brain images, the Otsu method and a new fast table method are presented to analyze the saturation, hue and value information. Using these methods the grey matter can be segmented exactly. Anatomy information and region information are fused to analyze the intensity information and confirm the final results. Experiments on segmenting the Chinese visible human brain images show that the proposed method can obtain correct results accurately.
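To make the channel analysis above concrete, here is a minimal Python sketch of Otsu thresholding applied to the saturation channel of an HSV-converted slice. It is only an illustration of the kind of analysis the abstract describes; the placeholder image, the choice of channel, and the omission of the fuzzy anisotropic diffusion and fast table steps are all assumptions, not the authors' pipeline.

import numpy as np
from matplotlib.colors import rgb_to_hsv

def otsu_threshold(channel, bins=256):
    """Return the Otsu threshold of a single channel scaled to [0, 1]."""
    hist, edges = np.histogram(channel.ravel(), bins=bins, range=(0.0, 1.0))
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    best_t, best_var = 0.0, -1.0
    for i in range(1, bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:i] * centers[:i]).sum() / w0
        mu1 = (hist[i:] * centers[i:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

# rgb: H x W x 3 float array in [0, 1], standing in for one CVH brain slice.
rgb = np.random.rand(64, 64, 3)                    # placeholder image (assumption)
hsv = rgb_to_hsv(rgb)                              # hue, saturation, value channels
saturation = hsv[..., 1]
mask = saturation > otsu_threshold(saturation)     # rough foreground/background split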
A CPCA Principal Axes Corrected Approach Using the Direction of Maximum Normal Distribution
Wan Lili and Zhao Qinping
2007, 44(12):  2044-2050. 
Abstract ( 374 )   HTML ( 0 )   PDF (431KB) ( 530 )
PCA-alignment only captures the relative locations of a 3D model's surface vertices, and thus does not provide robust rotation normalization. In order to obtain more reasonable axes for the rotation normalization of 3D models, an approach that corrects the principal axes produced by the CPCA approach using the maximum normal distribution is proposed in this paper. The approach is called MNCPCA (maximum normal corrected PCA) for short. Two appearance attributes, vertex position coordinates and normal directions, are considered in the approach. To improve the robustness of the algorithm, a group of reference normals obtained by uniform sampling is first used to compute the normal distribution histogram, in order to avoid errors in numerical calculation and incorrect normals; next, the normal distribution characteristic of the 3D model is analyzed, judging from the histogram whether the model has an obvious maximum normal distribution. Then, based on this analysis, a strategy to correct the CPCA principal axes is presented: the principal axes are adjusted only for models that have an obvious maximum normal distribution, and for the other models the CPCA principal axes remain unchanged. Experimental results show that, judged by human cognition, the MNCPCA approach yields more reasonable axes than the CPCA approach.
Fingerprint Ridge Orientation Estimation Based on Machine Learning
Zhu En, Yin Jianping, Zhang Guomin, and Hu Chunfeng
2007, 44(12):  2051-2057. 
Abstract ( 727 )   HTML ( 0 )   PDF (846KB) ( 602 )
Fingerprint recognition is a method for biometric authentication. A fingerprint image consists of interleaving ridges and valleys. Ridge terminations and bifurcations, collectively called minutiae, are generally used for fingerprint matching. Automatic fingerprint recognition typically goes through a series of processes, including ridge orientation estimation, segmentation, enhancement, minutiae detection and matching. Ridge orientation is one of the fundamental features of a fingerprint image, and orientation estimation is the basis of fingerprint recognition, since it serves segmentation, enhancement, minutiae extraction and matching. Most existing orientation estimation methods are based on the characteristics of pixel intensity in a block. In this paper a neural network is used to learn the ridge orientation. At the training stage, correct orientations are fed into the network as positive samples, and incorrect orientations are fed in as negative samples. The trained network responds to the true ridge orientation with a large value and to false ridge orientations with a small value. When estimating fingerprint ridge orientation, the response values to each orientation at each image block are used to compute the fingerprint orientation field. The proposed method turns out to be more robust than the existing methods.
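The following Python sketch illustrates the block-wise selection idea described above: a scorer is evaluated at a set of discretized orientations for each block and the best-responding angle is kept. The scorer here is a simple gradient-based stand-in, clearly not the paper's trained neural network, and the block size and angle grid are assumptions.

import numpy as np

ORIENTATIONS = np.deg2rad(np.arange(0, 180, 11.25))   # candidate ridge angles (assumption)

def block_response(block, theta):
    """Stand-in for the trained network's response: large for the true ridge
    orientation, small otherwise.  Here the score is minus the energy of the
    directional derivative along theta (intensity changes little along a
    ridge); this is only an illustrative substitute for the learned scorer."""
    gy, gx = np.gradient(block.astype(float))
    d_along = np.cos(theta) * gx + np.sin(theta) * gy
    return -np.mean(d_along ** 2)

def orientation_field(image, block_size=16):
    """Estimate one orientation per block by keeping the best-responding angle."""
    h, w = image.shape
    field = np.zeros((h // block_size, w // block_size))
    for i in range(field.shape[0]):
        for j in range(field.shape[1]):
            block = image[i * block_size:(i + 1) * block_size,
                          j * block_size:(j + 1) * block_size]
            responses = [block_response(block, t) for t in ORIENTATIONS]
            field[i, j] = ORIENTATIONS[int(np.argmax(responses))]
    return field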
Hierarchical Obstacle Avoidance for Crowd Simulation
Wang Jie, Wang Zhaoqi, Li Chunpeng, Mao Tianlu, and Xia Shihong
2007, 44(12):  2058-2065. 
Abstract ( 324 )   HTML ( 1 )   PDF (436KB) ( 480 )
Obstacle avoidance is an extremely challenging issue in crowd simulation. Many different approaches have been proposed with the aim of avoiding collision and eliminating intersection artifacts between two agents or between an agent and the environment. But most of these methods cannot guarantee against overlapping, while the methods that can ensure no overlap place many spatial restrictions on agents' behavior or may cause visually unpleasant artifacts. To solve this problem, hierarchical obstacle avoidance adopts a three-level method that contains two levels of obstacle avoidance behavior and an intersection elimination level. The two-level obstacle avoidance behavior is used to avoid collision during behavior planning and execution, while the intersection elimination level is employed to adjust overlapping after coarse updated positions are calculated for the agents. The two-level obstacle avoidance behavior, including static obstacle avoidance and dynamic obstacle avoidance, greatly reduces the complexity of the situations that must be considered in planning and executing an agent's behavior, by dividing the objects into two kinds, static obstacles and dynamic obstacles, according to their attributes and using a separate obstacle avoidance method for each. The intersection elimination level, based on an agent's variable bounding box and its previous position, completely eliminates intersection artifacts in every update period without spatial restrictions or visual artifacts.
Fast Fractal Image Coding Based on Quincunx Sums of Normalized Blocks
He Chuanjiang, Liu Weisheng, and Shen Xiaona
2007, 44(12):  2066-2071. 
Abstract ( 381 )   HTML ( 0 )   PDF (368KB) ( 505 )
Fractal image coding can provide high decoded image quality at a high compression ratio, but it traditionally requires a very long runtime in the encoding process. Therefore, it is essential to develop fast encoding algorithms before it can be widely used in various applications. The encoding time is mostly spent on searching for the block best matched to an input range block in a usually large domain pool; a new scheme is thus proposed in this paper to limit the search space. It first defines a new feature, the quincunx sum, which is the sum of the intensities of a normalized image block at its four corners and at its center. Then, the quincunx sum is used to efficiently confine the search space to the vicinity of the initially matched block (i.e., the domain block whose quincunx sum is closest in absolute value to that of the input range block being encoded). Experimental results show that this method can drastically reduce the number of range-domain comparisons needed to encode each range block. The proposed algorithm has also been compared with the fast fractal encoding algorithm based on cross-trace, showing that under the same search neighborhood it performs better in terms of encoding time, image quality and compression rate.
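A minimal Python sketch of the quincunx-sum idea follows: the feature is the sum of a normalized block's intensities at its four corners and its center, and domain blocks are ranked by how close their quincunx sums are to that of the range block. The particular normalization, the neighborhood size k, and the function names are illustrative assumptions, not the paper's exact definitions.

import numpy as np

def normalize_block(block):
    """Zero-mean, unit-norm version of a block (one common normalization;
    the paper's exact normalization may differ)."""
    b = block.astype(float) - block.mean()
    norm = np.linalg.norm(b)
    return b / norm if norm > 0 else b

def quincunx_sum(block):
    """Sum of the normalized block's intensities at its four corners and center."""
    b = normalize_block(block)
    h, w = b.shape
    return b[0, 0] + b[0, w - 1] + b[h - 1, 0] + b[h - 1, w - 1] + b[h // 2, w // 2]

def candidate_domains(range_block, domain_blocks, k=32):
    """Restrict the search to the k domain blocks whose quincunx sums are
    closest to that of the range block (the 'vicinity' of the initial match)."""
    q_r = quincunx_sum(range_block)
    q_d = np.array([quincunx_sum(d) for d in domain_blocks])
    return np.argsort(np.abs(q_d - q_r))[:k]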
A New Supervised Manifold Learning Method
Meng Deyu, Xu Zongben, and Dai Mingwei
2007, 44(12):  2072-2077. 
Abstract ( 582 )   HTML ( 0 )   PDF (379KB) ( 654 )
A new supervised manifold learning method is proposed in this paper, in order to present a new strategy for efficiently applying manifold learning and nonlinear dimensionality reduction methods to supervised learning problems. The new method achieves efficient supervised learning mainly by integrating the topology preserving property of the manifold learning methods (Isomap and LLE) with some prominent properties of the support vector machine, such as its efficiency on small and medium sized data sets and the essential role of the support vectors it computes. The method is realized via the following steps: first, apply Isomap or LLE to get the embeddings of the original data set in the low dimensional space; then obtain the support vectors, which are the most significant and intrinsic data for the final classification result, by applying a support vector machine to these low dimensional embeddings; subsequently, get the support vectors in the original high dimensional space based on the labels corresponding to the obtained low dimensional support vectors; finally, apply the support vector machine again to these high dimensional support vectors to obtain the final classification discriminant function. The good performance of the new method on a series of synthetic and real world data sets verifies its feasibility and efficiency.
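A rough Python sketch of the four steps, using scikit-learn's Isomap and SVC in place of the paper's specific settings, is given below; the digits data set, the embedding dimension, and the kernel choice are assumptions made only to keep the example self-contained.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import Isomap
from sklearn.svm import SVC

# Toy data standing in for the original high-dimensional set (an assumption).
X, y = load_digits(return_X_y=True)

# Step 1: embed the data with a manifold learning method (Isomap here).
X_low = Isomap(n_neighbors=10, n_components=5).fit_transform(X)

# Step 2: train an SVM on the low-dimensional embedding to find support vectors.
svm_low = SVC(kernel="rbf").fit(X_low, y)
sv_idx = svm_low.support_                 # indices of the low-dimensional support vectors

# Step 3: take the corresponding points in the original high-dimensional space.
X_sv, y_sv = X[sv_idx], y[sv_idx]

# Step 4: train the final classifier on the high-dimensional support vectors.
final_clf = SVC(kernel="rbf").fit(X_sv, y_sv)
print("resubstitution accuracy on the toy set:", final_clf.score(X, y))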
A Mini-Conflict Repair Based Algorithm for Solving Dynamic Constraint Satisfaction Problems
Sun Jigui, Gao Jian, and Zhang Yonggang
2007, 44(12):  2078-2084. 
Abstract ( 508 )   HTML ( 1 )   PDF (431KB) ( 583 )
The constraint satisfaction problem (CSP) is an important research branch of artificial intelligence. Recently, the dynamic CSP has been proposed as a powerful tool for solving many real-world problems in dynamic environments, and several algorithms for solving dynamic CSPs have been presented. Among these algorithms, the local change (LC) algorithm, based on the solution reuse strategy, is a method for solving many kinds of dynamic CSPs and is efficient for flexible planning. On the basis of the widely used LC algorithm, the tabu search strategy is integrated and a mini-conflict repair based algorithm, called Tabu_LC, is proposed. The improved algorithm considers all the conflict variables as a whole and then solves the sub-problems with a branch and bound algorithm to find the best neighboring assignment, which improves the efficiency markedly. Furthermore, the Tabu_LC algorithm is implemented in the framework of the constraint solving system “Ming-yue 1.0” and compared with the LC algorithm on a large number of random CSPs. The experiments indicate that the improved algorithm outperforms the LC algorithm in both efficiency and quality of solutions.
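For illustration, here is a generic Python sketch of min-conflict repair combined with a tabu list on a binary CSP. It shows the flavor of the combination only; the paper's Tabu_LC additionally works inside the LC framework for dynamic CSPs and solves the conflict-variable sub-problems by branch and bound, which is not reproduced here. The constraint encoding and parameters are assumptions.

import random

def conflicts(assignment, constraints):
    """Constraints are (u, v, pred) triples on a binary CSP (assumed encoding)."""
    return [(u, v) for (u, v, pred) in constraints
            if not pred(assignment[u], assignment[v])]

def num_conflicts(var, value, assignment, constraints):
    trial = dict(assignment)
    trial[var] = value
    return sum(1 for (u, v, pred) in constraints
               if var in (u, v) and not pred(trial[u], trial[v]))

def min_conflict_tabu(domains, constraints, assignment, max_steps=10000, tabu_len=20):
    """Min-conflict repair with a simple tabu list: a generic illustration of
    combining local repair with tabu search, not the paper's Tabu_LC itself."""
    tabu = []
    for _ in range(max_steps):
        bad = conflicts(assignment, constraints)
        if not bad:
            return assignment                     # all constraints satisfied
        var = random.choice(random.choice(bad))   # pick a conflicting variable
        candidates = [v for v in domains[var] if (var, v) not in tabu]
        if not candidates:
            candidates = list(domains[var])
        value = min(candidates,
                    key=lambda v: num_conflicts(var, v, assignment, constraints))
        assignment[var] = value
        tabu.append((var, value))
        if len(tabu) > tabu_len:
            tabu.pop(0)
    return None                                   # no repair found within the step limit

# Tiny 3-coloring example (illustrative): a 4-cycle of variables a-b-c-d-a.
doms = {v: [0, 1, 2] for v in "abcd"}
cons = [(u, v, lambda x, y: x != y) for u, v in [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]]
print(min_conflict_tabu(doms, cons, {v: 0 for v in doms}))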
Segmentation and Recognition of Handwritten Chinese Day String
Zhang Chongyang, Xu Yong, Lou Zhen, and Yang Jingyu
2007, 44(12):  2085-2091. 
Abstract ( 458 )   HTML ( 1 )   PDF (458KB) ( 535 )
Segmentation and recognition of off-line handwritten Chinese character strings is a difficult task in the field of character recognition research. A standard way to recognize a character string is to segment it into isolated characters and then compose their recognition results into words or strings. The purpose of segmentation is to reduce the number of pattern classes that are sent to the recognition engines. However, recognition failures are caused by missing segmentation lines, non-character patterns and unreliable recognition scores. To recognize the Chinese day strings on check images, a rule based method is proposed. It recognizes date strings by combining a holistic method and a segmentation-recognition based method. The holistic method recognizes the whole string as a single character without segmentation. The segmentation-recognition based method first finds as many candidate segmentation lines as possible by projection and structure analysis; then it reduces the segmentation lines according to a predicted string length; finally, the best recognition result is selected by recognition scores. Experiments have been done on 5569 real-life check images collected from Chinese banks. The experimental results demonstrate the efficiency of the proposed method: the string recognition rate reaches 93.3% on the 1932 test strings.
A Heuristic Algorithm for the Unequal Circle Packing Problem
Chen Mao and Huang Wenqi
2007, 44(12):  2092-2097. 
Abstract ( 487 )   HTML ( 4 )   PDF (329KB) ( 561 )
The circle packing problem, one of the NP-hard problems, is of great theoretical and practical value. To solve the circle packing problem encountered in the field of freight transportation, a heuristic algorithm is proposed for finding a good arrangement of multiple different-sized circles within a larger rectangular container. In this algorithm, the circles are sorted in non-increasing order of radius and packed into the container one by one in that order. Each circle is placed inside the container by a corner placement, so that it does not overlap any other circle and is tangent to two previously packed circles. By pseudo-placing one or more circles yet to be packed, two greedy methods are introduced to evaluate the benefit of a corner placement: one is the degree of placement, and the other is a bounded enumeration strategy based on the first. At each iteration of packing in the resulting algorithm, each circle is packed into the container by the corner placement with the highest benefit according to the bounded enumeration strategy. Experimental results are presented, showing the effectiveness of this algorithm.
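The geometric core of a corner placement, computing candidate centers for a new circle tangent to two already-placed circles and checking feasibility inside the rectangular container, can be sketched in Python as below. The benefit evaluation (degree of placement and the bounded enumeration strategy) is omitted, and the tolerance and function names are assumptions.

import math

def tangent_centers(c1, r1, c2, r2, r):
    """Centers of a circle of radius r tangent externally to two placed circles:
    a circle-circle intersection with radii r1 + r and r2 + r."""
    d1, d2 = r1 + r, r2 + r
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    if d == 0 or d > d1 + d2 or d < abs(d1 - d2):
        return []                                  # no tangent position exists
    a = (d1 * d1 - d2 * d2 + d * d) / (2 * d)
    h = math.sqrt(max(d1 * d1 - a * a, 0.0))
    mx, my = c1[0] + a * dx / d, c1[1] + a * dy / d
    return [(mx + h * dy / d, my - h * dx / d),
            (mx - h * dy / d, my + h * dx / d)]

def feasible(center, r, placed, width, height):
    """Inside the container and not overlapping any previously packed circle.
    `placed` is a list of ((x, y), radius) pairs."""
    x, y = center
    if x < r or y < r or x > width - r or y > height - r:
        return False
    return all(math.hypot(x - px, y - py) >= r + pr - 1e-9
               for (px, py), pr in placed)

In the full heuristic, the circles sorted by non-increasing radius would be tried one by one, enumerating such candidate positions over pairs of placed circles (and the container corners for the first placements) and keeping the feasible one with the highest greedy benefit.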
DHMC: An Improved Parallel & Distributed Storage Structure for High-Dimensional Cube
Hu Kongfa, Chen Ling, Zhao Maoxian, Da Qingli, and Ji Zhaohui
2007, 44(12):  2098-2105. 
Abstract ( 444 )   HTML ( 0 )   PDF (553KB) ( 456 )
Data cubes and their pre-computation play an essential role in fast OLAP (online analytical processing) in many data warehouses. A cube with d dimensions can generate 2^d cuboids and ∏_{i=1}^{d}(|D_i|+1) aggregate cells, but in a high-dimensional cube it might not be practical to build all these cuboids. In this paper, a novel parallel and distributed storage structure for high-dimensional cubes based on shell segment mini-cubes (DHMC) is proposed. DHMC partitions the high-dimensional cube into several low-dimensional shell segment mini-cubes. OLAP queries are computed online by dynamically constructing cuboids from these shell segment mini-cubes through the parallel and distributed processing system. With this design, for high-dimensional OLAP, the total space needed to store such shell segment mini-cubes is negligible in comparison with that of a full high-dimensional cube. Such an approach permits a significant reduction of CPU and I/O overhead for many queries by restricting the number of cube segments to be processed for both the fact table and the bitmap indices. The proposed data allocation and processing model supports parallel I/O and parallel processing, as well as load balancing for disks and processors. The shell mini-cube methods are compared with other existing ones such as the full cube and the partial cube. The analytical and experimental results show that the proposed DHMC algorithms are more efficient than the existing ones.
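A toy Python/pandas sketch of the shell-segment idea follows: the dimensions are partitioned into small segments and only the cuboids within each segment are precomputed. The column names, segment size and measure are illustrative assumptions; the online assembly of cross-segment queries and the parallel/distributed allocation described in the abstract are not shown.

import itertools
import pandas as pd

def shell_segment_cubes(fact, dims, segment_size=3, measure="sales"):
    """Precompute, for every segment of `segment_size` dimensions, all group-by
    aggregates inside that segment only (a shell-segment style mini-cube),
    instead of all 2^d cuboids of the full cube."""
    mini_cubes = {}
    segments = [dims[i:i + segment_size] for i in range(0, len(dims), segment_size)]
    for seg in segments:
        for k in range(1, len(seg) + 1):
            for cols in itertools.combinations(seg, k):
                mini_cubes[cols] = fact.groupby(list(cols))[measure].sum()
    return mini_cubes

# Illustrative fact table with six dimensions (names and values are assumptions).
fact = pd.DataFrame({
    "d1": [1, 1, 2], "d2": [0, 1, 1], "d3": [5, 5, 6],
    "d4": [7, 8, 7], "d5": [0, 0, 1], "d6": [3, 3, 3],
    "sales": [10.0, 4.0, 7.0],
})
cubes = shell_segment_cubes(fact, ["d1", "d2", "d3", "d4", "d5", "d6"])
# A query spanning several segments would be answered online by combining the
# relevant mini-cubes (and bitmap indices); that step is not precomputed here.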
The Sufficient and Necessary Condition for No Implicit Redundancies in an XML Schema
Wu Yonghui
2007, 44(12):  2106-2111. 
Abstract ( 376 )   HTML ( 0 )   PDF (312KB) ( 395 )
XML has emerged as a standard for data representation and interchange on the Internet. Normally the first step in building an XML application is XML database schema design. Normalization design of an XML database schema generates a set of related XML schemas or DTDs that can represent data dependencies and eliminate redundancies, in order to make information retrieval better. Redundancies exist in an XML database schema because data dependencies exist in it, so the relationship between data dependencies and redundancies in an XML database schema is a key problem for research on its normalization design; however, so far there has been no research on it. Data dependencies in an XML database schema consist of data dependencies among attributes and among elements. In this paper, data dependencies in an XML database schema are defined by synthesizing the data dependencies among attributes and among elements, the implicit redundancies related to them are analyzed, and the semi-normal XML schema and the normal XML schema are defined on this basis. Then the sufficient and necessary condition is proved: an XML schema has no implicit redundancies if and only if the XML schema is normal. This work lays a theoretical foundation for further research on normalization design of XML database schemas.
A Semantic Description Model of Features for Service Component Using OCL
Jin Xianli and Ma Huadong
2007, 44(12):  2112-2121. 
Abstract ( 363 )   HTML ( 0 )   PDF (541KB) ( 497 )
The semantic features of a component are the foundation for component retrieval, and are an emphasis of research on component repositories. However, since there are a large number of dynamic, distributed service components in a network system, many components may have some kind of relationship with, or dependence on, each other. As a result, how to efficiently describe the relationships between the features of different service components is a vital problem for the component management model, and it needs to be further investigated. In this paper, a feature-based semantic description model for components is proposed. First, the representation models of the feature, the feature space and the component feature space are defined respectively. Then, the association and dependency relations among the features of components are classified into four groups: self constraint, paternity constraint, dominant constraint and recessive constraint. These four kinds of constraint are formally described with the Object Constraint Language (OCL), which provides accurate semantic support for the model. In order to illustrate the approach's application, a feature-based component sub-tree for e-business is studied as a case. The results of model checking and experiments in the Object Constraint Language environment prove that the model is correct and valid.
A P2P-Based Component Library Interconnection Technique Supporting Query Refactoring
Li Yan, Li Tian, Xie Bing, Zhang Lu, and Sun Jiasu
2007, 44(12):  2122-2129. 
Abstract ( 266 )   HTML ( 3 )   PDF (408KB) ( 380 )
Software reuse is a feasible way to address the software crisis. With the development of software reuse techniques and network techniques, more and more component libraries have emerged on the Internet. However, the components that a reuser needs are usually distributed across multiple libraries, and the ways components are described in those libraries differ, which makes the acquisition of components quite difficult. Thus, it is necessary to provide reusers with an effective mechanism to help them acquire components from multiple component libraries. Proposed in this paper is a component library interconnection technique called DCLITTA, which supports resource sharing among distributed component libraries and supplies a ‘transparent’ retrieval mechanism to reusers. DCLITTA organizes independent component libraries in a flexible way by leveraging a peer-to-peer (P2P) network architecture. Meanwhile, to deal with the differences in component description models among component libraries, DCLITTA refactors reusers' queries automatically to improve the retrieval effect. Based on the technique introduced in this paper, the authors designed and implemented a supporting system which has already been put into practical use in the component libraries of the Beijing and Shanghai software parks.
Structural Properties of Parallel Program's Petri Net Model
Cui Huanqing and Wu Zhehui
2007, 44(12):  2130-2135. 
Abstract ( 416 )   HTML ( 0 )   PDF (383KB) ( 400 )
High-performance computing based on parallel programming is used in many fields, and correctness is fundamental to parallel programs, but parallel programs are more difficult to verify than sequential programs because of their complexity; thus it is necessary to model them. Petri nets are often used to model parallel programs, and most research work verifies the program from the point of view of the Petri net. In this paper, a message-passing parallel program is first transformed into a Petri net model, and its structural properties are then studied from the program's standpoint. It is proved that the Petri net model of a concurrency-correct parallel program is strongly connected and satisfies the controlled siphon property, that each of its process subnets is conservative, and that each of its transitions belongs to the support of some T-invariant. Two examples, a simple point-to-point non-blocking communication and a producer-consumer system, are given to show the application of these properties to the verification of parallel programs. These properties can be used for beforehand verification of a parallel program, and this method avoids the state explosion caused by verification with dynamic properties; thus it can improve the efficiency of program design and verification. This method can be generalized easily.
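One of the stated structural properties, that every transition belongs to the support of some T-invariant, can be checked mechanically from the incidence matrix. The Python sketch below does this with a per-transition linear-programming feasibility test using scipy (nonnegative rational solutions of C x = 0 can be scaled to integer T-invariants); the example net is illustrative, and this is only a checking aid, not the paper's verification procedure.

import numpy as np
from scipy.optimize import linprog

def covered_by_t_invariant(C):
    """For each transition t, check whether some x >= 0 with C @ x = 0 and
    x_t >= 1 exists, i.e. whether t lies in the support of a T-invariant."""
    n_places, n_transitions = C.shape
    covered = []
    for t in range(n_transitions):
        bounds = [(1.0, None) if j == t else (0.0, None) for j in range(n_transitions)]
        res = linprog(c=np.zeros(n_transitions), A_eq=C, b_eq=np.zeros(n_places),
                      bounds=bounds, method="highs")
        covered.append(res.status == 0)            # status 0: a feasible point was found
    return covered

# Incidence matrix of a tiny two-transition cycle p1 -t1-> p2 -t2-> p1 (illustrative).
C = np.array([[-1,  1],
              [ 1, -1]])
print(covered_by_t_invariant(C))   # expected: [True, True]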
Cluster-Based Edge Streaming Server with Adaptive Load Balance in Mobile Grid
Chai Yunpeng, Gu Lei, and Li Sanli
2007, 44(12):  2136-2142. 
Abstract ( 420 )   HTML ( 0 )   PDF (416KB) ( 482 )
With the fast development of 3G networks and the gradual popularity of Wi-Fi covering whole city areas, the requirement for multimedia services in the mobile grid has grown significantly in recent years. Because streaming media services on the Internet are the most important supplier for mobile multimedia users in the mobile grid, edge streaming servers, which are located at the boundary between the wireless network and the Internet, can act as bridges and buffers, with a notable effect on reducing the load on the Internet and improving the QoS of mobile multimedia services. Accordingly, a new design of cluster-based edge streaming server (CESS) is introduced; meanwhile, aiming at the most important problem of CESS, namely load balancing, some analysis and experiments are carried out. On the basis of this work, a new cache replacement algorithm called MCLBS is proposed to make CESS an adaptive load-balancing system. Finally, the experimental results and related analysis show that, compared with traditional cache replacement algorithms, the MCLBS algorithm, which is more suitable for a cluster server architecture, increases the cache hit rate significantly and greatly reduces the remote server's bandwidth requirements in the same environment.
VLSI Design and Implementation of a High-Speed Viterbi Decoder
Li Qing, Deng Yunsong, Zeng Xiaoyang, and Gu Yehua
2007, 44(12):  2143-2148. 
Abstract ( 318 )   HTML ( 0 )   PDF (344KB) ( 1070 )
Based on an optimized structure, a high-speed (2,1,7) Viterbi decoder with a trace-back length of 64 is presented in this paper. Considering punctured convolutional codes for Viterbi decoding and the hardware complexity of the implementation, a modified ACS (add-compare-select) unit is used to satisfy the decoding requirements and reduce the hardware complexity. A parallel structure is also adopted to meet the working speed requirements without increasing the hardware complexity. In order to increase the decoding throughput rate, the decoder employs a blocked cyclic memory composed of register files, which helps reduce the implementation size of the decoder. A new trace-back unit is introduced to improve the way data are read and written during trace-back. Implemented in SMIC 0.18μm standard CMOS technology, the hardware scale is about 28 683 gates (a 2-input NAND is counted as one gate), and the maximum speed is about 180MHz. Compared with other reported schemes, the proposed Viterbi decoder performs better in terms of implementation size, throughput rate and constraint length. With these excellent performances, the proposed Viterbi decoder is very suitable for digital communication applications that need a high throughput rate and a small implementation size, such as DTV and HDTV.
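As a software-level illustration of the ACS operation mentioned above, the Python sketch below performs one add-compare-select step over a small generic trellis; the state count, predecessor table and branch metrics are toy assumptions and do not reproduce the radix-2 butterflies, puncturing support or fixed-point details of the actual (2,1,7) decoder.

import numpy as np

def acs_step(path_metrics, branch_metrics, predecessors):
    """One add-compare-select step.  For each state s, add the branch metric of
    every incoming transition to its predecessor's path metric, keep the
    smallest sum as the new path metric, and record which predecessor survived
    (the decision bit later used by the trace-back unit)."""
    n_states = len(path_metrics)
    new_metrics = np.empty(n_states)
    decisions = np.empty(n_states, dtype=np.int8)
    for s in range(n_states):
        cands = [path_metrics[p] + branch_metrics[s][i]          # add
                 for i, p in enumerate(predecessors[s])]
        best = int(np.argmin(cands))                             # compare
        new_metrics[s] = cands[best]                             # select
        decisions[s] = best
    return new_metrics, decisions

# Toy 4-state trellis (predecessors and branch metrics are illustrative only).
predecessors = [[0, 2], [0, 2], [1, 3], [1, 3]]
branch_metrics = [[0.0, 1.0], [1.0, 0.0], [0.5, 1.5], [1.5, 0.5]]
pm, dec = acs_step(np.zeros(4), branch_metrics, predecessors)
print(pm, dec)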