ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 July 2005, Volume 42 Issue 7
Paper
A Survey of Digital Watermarking
Yin Hao, Lin Chuang, Qiu Feng, and Ding Rong
2005, 42(7):  1093-1099. 
Digital watermarking, the technology of embedding special information into multimedia data, is a topic that has recently gained increasing attention worldwide. Watermarking of digital images, audio, video, and other media products has been proposed for resolving copyright ownership and verifying the integrity of content. The characteristics and applications of watermarking techniques are first introduced, and then the basic concepts and evaluation criteria are elaborated. For further understanding, watermarking techniques are classified from various aspects, and some conventional techniques and algorithms are analyzed in detail; their security and performance are also compared. Finally, possible research directions of digital watermarking technology are pointed out.
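As a concrete illustration of the embedding idea the survey opens with, the sketch below shows least-significant-bit (LSB) watermarking, one of the simplest spatial-domain techniques; the function names and the list-of-pixels representation are illustrative assumptions, not the survey's own notation.

```python
# Minimal sketch of spatial-domain LSB watermarking (illustrative only).
def embed_lsb(pixels, bits):
    """Embed watermark bits into the LSBs of a list of 8-bit pixel values."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b  # clear the LSB, then set it to the watermark bit
    return out

def extract_lsb(pixels, n):
    """Recover the first n watermark bits from the pixel LSBs."""
    return [p & 1 for p in pixels[:n]]
```

Because only the lowest bit of each pixel changes, the marked image is visually indistinguishable from the original, which is exactly the imperceptibility criterion the survey discusses.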
A New Taxonomy of Attacks on Security Protocols and Their Security Evaluation
Zhuo Jiliang, Li Xianxian, Li Jianxin, and Huai Jinpeng
2005, 42(7):  1100-1107. 
Security analysis and evaluation of security protocols are very important, yet usually hard to carry out. Almost all existing research concentrates on analyzing certain security properties of protocols in open network environments, such as secrecy and authentication. To evaluate security protocols more comprehensively with respect to their ability to withstand attacks, the classification of intruders' capabilities is studied, and a new taxonomy of attacks on security protocols based on both intruders' capabilities and attack consequences is presented. With this classification, the characteristics and mechanisms of each attack type are analyzed. Finally, a security evaluation framework for security protocols based on the two-dimensional taxonomy is discussed, which helps to objectively evaluate a protocol's ability to prevent attacks and also helps in designing new security protocols.
A DAG-Based Security Policy Conflicts Detection Method
Yao Jian, Mao Bing, and Xie Li
2005, 42(7):  1108-1114. 
Policies are increasingly used in the field of security management, and conflict among security policies is one of the most difficult problems in this field. The shortcomings of previous methods for detecting security policy conflicts are analyzed. Security policies are considered a kind of relation between subject and object concerning authority or obligation, where subjects and objects are elements in a distributed system. In studying the relations among these elements, the concept of a "field" is introduced; relations between fields can express the relations among the elements of the distributed system. A directed acyclic graph model is given to precisely describe the relations between fields, and a quantitative method based on this model for detecting security policy conflicts is then presented. A number of cases of security policy conflict are studied to demonstrate the method's correctness and practicality. Finally, the algorithmic complexity is analyzed: it is proportional to the number, or to the square of the number, of vertices in the directed acyclic graph. Experimental data is also provided to support this conclusion. This work extends the means of detecting security policy conflicts and improves the practicality of security policies.
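The abstract's core idea can be sketched as follows; the conflict rule used here (a permit and a deny on the same action over DAG-related fields) and all names are assumptions for illustration, not the paper's exact model.

```python
# Hypothetical sketch: fields form a DAG, and two policies conflict when one
# permits and the other denies the same action on fields related by reachability.
def reachable(dag, src, dst):
    """Iterative DFS reachability in a DAG given as {node: [children]}."""
    stack, seen = [src], set()
    while stack:
        n = stack.pop()
        if n == dst:
            return True
        if n not in seen:
            seen.add(n)
            stack.extend(dag.get(n, []))
    return False

def find_conflicts(dag, policies):
    """policies: list of (field, action, modality) with modality 'permit'/'deny'."""
    conflicts = []
    for f1, a1, m1 in policies:
        for f2, a2, m2 in policies:
            if a1 == a2 and m1 == 'permit' and m2 == 'deny':
                if f1 == f2 or reachable(dag, f1, f2) or reachable(dag, f2, f1):
                    conflicts.append(((f1, a1, m1), (f2, a2, m2)))
    return conflicts
```

The pairwise scan over policies is what makes the cost quadratic in the worst case, matching the complexity the abstract reports.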
Research on Network Service Architecture and Its Formal Model
Yang Peng and Wu Jiagao
2005, 42(7):  1115-1122. 
The essence of the next generation network is an information infrastructure that not only supports various information services but also sustains the complete informatization of politics, economy, culture, education, and national defense of every country. The traditional network architecture, layered only by basic communication functions, can no longer meet the requirements of the next generation network. An interaction-based network service architecture (INSA) is proposed in this paper, which can act as a reference model for the next generation network service architecture. The overall structure and the functions of each layer of INSA are described in detail. An abstract formal model of the next generation network service architecture is also proposed, which provides a new approach to describing, analyzing, and verifying the service-related characteristics of the architecture.
Stability of an AQM Control Algorithm with Communication Delays
Yang Hongyong, Kong Xiangxin, and Zhang Fuzeng
2005, 42(7):  1123-1127. 
An AQM (active queue management) control algorithm is a complicated nonlinear dynamical feedback system, applied at the link node to control congestion based on the network load from the source nodes. In order to investigate the influence of communication delays on the quality of service (QoS) of the Internet, the stability of an AQM algorithm with communication delays is studied by applying the generalized Nyquist criterion. Stability criteria for the AQM scheme with homogeneous and heterogeneous delays are obtained by analyzing the frequency function of the network system. These results show that communication delay is one of the key factors affecting Internet performance and plays an important role in the communication processes of the Internet. Computer simulations support the validity of these stability criteria for the AQM congestion control algorithm.
A Fast Matching Algorithm and Conflict Detection for Packet Filter Rules
Tian Daxin, Liu Yanheng, Li Yongli, and Tang Yi
2005, 42(7):  1128-1135. 
The bottleneck of packet filtering arises from the large number of filter rules that must be checked. A fast matching algorithm, BSLT (binary search in leaves of tries), is presented. It is based on a trie construction that stores the matching rules only in the leaf nodes, and thus consumes less memory: the space complexity is O(NW), where N is the number of filter rules and W is the maximum number of bits specified in the destination or source fields. Binary search is used to find the matching rule within the leaf nodes, which speeds up matching; the search and matching time complexities are O(W) and O(N), respectively. Experimental performance evaluations show that the throughput of BSLT is about 20% greater than that of the sequential matching algorithm. The rule conflict problem is also examined, and a conflict detection algorithm is given. Experiments show that the algorithm detects conflicts correctly.
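The structure described above can be sketched as follows — a binary trie keyed on destination-address bits whose leaves hold rules sorted so the final match is a binary search rather than a linear scan. This is a simplified reading of the BSLT idea, not the paper's data structure; rule fields and encodings here are assumptions.

```python
import bisect

# Hypothetical BSLT-style lookup: trie over destination bits, leaves hold
# (port_lo, port_hi, rule_id) sorted by port_lo for binary search.
def build_trie(rules):
    """rules: list of (dst_prefix_bits, port_lo, port_hi, rule_id)."""
    root = {}
    for bits, lo, hi, rid in rules:
        node = root
        for b in bits:
            node = node.setdefault(b, {})
        node.setdefault('leaf', []).append((lo, hi, rid))
    def sort_leaves(node):                 # sort each leaf so bisect works
        for k, v in node.items():
            v.sort() if k == 'leaf' else sort_leaves(v)
    sort_leaves(root)
    return root

def match(root, dst_bits, port):
    node = root
    for b in dst_bits:                     # walk the trie on destination bits
        if b not in node:
            break
        node = node[b]
    leaf = node.get('leaf', [])
    i = bisect.bisect_right(leaf, (port, float('inf'), float('inf'))) - 1
    while i >= 0:                          # walk left over overlapping ranges
        lo, hi, rid = leaf[i]
        if lo <= port <= hi:
            return rid
        i -= 1
    return None
```

Storing rules only at leaves is what keeps the memory footprint linear in the number of rules times the address width.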
Active Queue Management Improving the Stability and Fairness
Tang Deyou, Luo Jiawei, Zhang Dafang, and Zhang Baini
2005, 42(7):  1136-1142. 
The IRTF has recommended the utilization of random early detection (RED), which uses the average queue length to determine congestion and dropping probabilities and drops packets randomly. But RED is unstable and cannot protect traffic from flows that consume more than the average bandwidth, or from flows that fail to use end-to-end congestion control, when the load is high. In this paper, a new scheme called early selective drop (ESD) is presented, which uses both the load and the average queue length to determine congestion and packet drop probabilities; the drop probability is computed by an exponential function. ESD uses a virtual queue to record per-active-flow state information and a method called pseudo-round-robin to filter out candidate connections from those flows that occupy more than the average bandwidth. ESD distinguishes non-adaptive greedy traffic from adaptive traffic, short-lived flows from long-lived flows, and long-RTT flows from short-RTT flows, and punishes non-adaptive flows when the network becomes congested. When it is time to drop a packet, ESD drops the candidate connection's packet at the front of the queue. Simulations demonstrate that ESD improves queue stability and fairness between different types of flows, decreases packet losses of Web traffic and long-RTT flows, and reduces application response time.
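The abstract says ESD derives the drop probability from both load and average queue length via an exponential function, but does not give the formula; the sketch below is therefore only a plausible form with that shape (the combination rule and the `alpha` gain are assumptions), intended to show how the two congestion signals could feed one exponential curve.

```python
import math

# Assumed ESD-style drop probability: grows with link load and average queue
# length, saturating toward 1 under heavy congestion. Not the paper's formula.
def drop_probability(load, avg_queue, max_queue, alpha=4.0):
    """load: arrival rate / link capacity; avg_queue in [0, max_queue]."""
    congestion = max(0.0, load - 1.0) + avg_queue / max_queue  # combined signal
    return 1.0 - math.exp(-alpha * congestion)
```

An uncongested router (load below capacity, empty queue) drops nothing, while an overloaded one with a long queue drops with probability approaching 1.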
CoPenML: An XML-Based Component Architecture for Pen-Based User Interface
Li Jie, Qin Yanyan, Tian Feng, and Dai Guozhong
2005, 42(7):  1143-1152. 
The pen-based user interface (Pen UI) is a primary style in the post-WIMP world and is commonly used in ubiquitous computing environments. However, most current Pen UI technologies do not support a high-level, visual, multi-disciplinary authoring process, and concepts of reuse are rarely provided. A component-based approach is introduced with the CoPenML architecture. The approach is entirely based on declarative XML documents describing the component configuration and composition of the Pen UI, as well as the backend application logic. Another advantage of the approach is that component reuse is provided both at the system implementation level and at the higher scene graph level. The overall architecture and the XML-based implementation are described, especially the markup language specification for the UI profile documents. Finally, the associated authoring environment and tools are outlined, showing that the approach is practical and effective.
Non-Backtrace Backward Chaining Dynamic Composition of Web Services Based on Mediator
Liu Jiamao, Gu Ning, and Shi Baile
2005, 42(7):  1153-1158. 
Proposed in this paper is a non-backtrace backward chaining framework for dynamic Web services composition based on a mediator. Under this framework, a rule modeling method for composition and a parameter-level ontology for eliminating semantic conflicts in composition are provided, and a non-backtrace backward chaining algorithm is then used to compose Web services. This approach avoids the need to optimize the composition plan; its parameter-level ontology handles the semantic problems of composition more efficiently than the attribute-level ontology used in forward-chaining approaches, and it is well suited to repeatedly composing the same Web services over the same user inputs to generate the same or different outputs.
Nested Knowledge Space Model and Awareness Processing in a Collaborative Learning Environment
Zhan Yongzhao, Wang Jinfeng, and Mao Qirong
2005, 42(7):  1159-1165. 
The arrangement and management of learning materials and awareness processing are key problems in a Web-based collaborative learning environment. In order to provide reasonable learning navigation and flexible awareness processing, a nested knowledge space model is presented, which includes concepts such as the knowledge domain, four kinds of relations among knowledge domains, and the knowledge space. Based on this model, a Web-based collaborative learning environment is established in which learning materials are organized and managed, helping users find the learning documents they need. To rid users of their feeling of isolation in the collaborative learning environment, a multi-level awareness processing model based on the nested knowledge space model is presented, offering two awareness methods: the multi-level awareness space method and the customized monitor method. In addition, an information filter and a privacy protector are provided, which filter redundant information and protect users' privacy effectively. The prototype system shows that these methods help learners find their learning goals and materials easily and finish their learning quickly through awareness of other learners' information and mutual discussion.
A Hierarchical Topological Relations Model of Fuzzy Raster Regions
Yu Qiangyuan, Liu Dayou, and Wang Shengsheng
2005, 42(7):  1166-1172. 
The modeling of topological relations between spatial regions is a primary topic in spatial reasoning, geographic information systems (GIS), and spatial databases. In many geographical applications, spatial regions do not always have homogeneous interiors and sharply defined boundaries; frequently their interiors and boundaries are fuzzy. Representing fuzzy spatial regions and modeling the topological relations between them is therefore of increasing theoretical and practical importance. Based on the characteristics of fuzzy regions in the raster data model and the requirements of topological relation analysis in applications, a hierarchical topological relations model is proposed. The model can determine the topological relation between fuzzy raster regions at multiple levels using the values of three predicates. When the predicates are evaluated over two values, the model handles crisp raster regions as a special case and there are 5 possible topological relations; when the predicates are evaluated over three values, there are 27 possible relations, and over six values there are 51. In practical applications, the model can analyze the topological relations of fuzzy raster regions according to the existing facts and requirements; it is easy to use in practice and achieves satisfactory results.
3D Left Ventricle Surface Reconstruction Based on Level Sets
Zhou Zeming, Wang Yuanquan, Pheng Ann Heng, and Xia Deshen
2005, 42(7):  1173-1178. 
A 3D left ventricle (LV) surface reconstruction algorithm using 2D MRI slices is proposed. MRI slices capture the boundary deformation in the imaging planes, but the boundary deformation between slices cannot be obtained due to the lack of imaging data. A deformable model is used to reconstruct the shape of the LV: first, the dynamic equation governing the surface deformation is derived; second, the image planes are mapped to the reconstruction coordinate system, the external force exerted on the surface is defined from the slice data, and the elastic force of the surface is constructed from the mean curvature of the deforming surface. Finally, a level set method is applied to solve the dynamic equation for the LV shape. Experimental results demonstrate the effectiveness of the reconstruction algorithm.
Research on Boundary Concavities Segmentation via Snake Models
Wang Yuanquan, Tang Min, Pheng Ann Heng, Xia Deshen, and Xu Ye
2005, 42(7):  1179-1184. 
Snake models have been extensively used since their debut in image processing and motion tracking, but their poor convergence on concave boundaries is a handicap for object localization. Although the GVF Snake model performs well on this problem, it suffers from the costly computation of its PDEs and from the so-called critical point problem in initial contour selection. In order to improve the performance of the traditional Snake model on concavity segmentation, a new external force based on the local curvature of the discrete contour, and a two-stage Snake-based algorithm, are proposed. The local curvature of the discrete contour, which characterizes the bending of the contour in a given direction, is defined using the center of the inscribed circle of the triangle formed by three consecutive contour nodes. The first stage of the new method is a traditional Snake; in the second stage the new force drives the contour into the concave region. This new force can also be generalized to enlarge the capture range of the Snake model, in which case it can be considered a generalization of the balloon force. To overcome the difficulty of determining its magnitude, the magnitude is set small and the gradient-based force first acts as resistance; when the contour has converged, the gradient-based force switches to attracting the contour. Generalized in this way, the capture range is enlarged and there is no critical point problem. Experimental results validate the performance of this method.
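The inscribed-circle construction from three consecutive contour nodes can be sketched as below. How the paper turns the circle into a force is not reproduced here; this only computes the inscribed radius of the triangle, which grows as the three nodes bend more strongly away from a straight line.

```python
import math

# Sketch of the local bending measure: the radius of the inscribed circle of
# the triangle formed by three consecutive contour nodes. Collinear nodes give
# radius 0; a strong local bend gives a larger radius.
def inscribed_radius(p, q, r):
    a = math.dist(q, r)   # side lengths opposite each vertex
    b = math.dist(p, r)
    c = math.dist(p, q)
    s = (a + b + c) / 2.0                                         # semi-perimeter
    area = math.sqrt(max(0.0, s * (s - a) * (s - b) * (s - c)))   # Heron's formula
    return area / s if s > 0 else 0.0
```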
Automatic Estimation of Visual Speech Parameters
Wang Zhiming, Cai Lianhong, and Ai Haizhou
2005, 42(7):  1185-1190. 
Visual speech parameter estimation plays an important role in the study of visual speech. In this paper, 24 speech-correlated parameters are selected from the MPEG-4 defined facial animation parameters (FAP) to describe visual speech. Combining statistical learning and rule-based methods, precise tracking results are obtained for the mouth contour and facial feature points based on the facial color probability distribution and prior knowledge of shape and edges. High-frequency noise in reference point tracking is eliminated by a low-pass filter, and the main face pose is estimated from the four most salient reference points to remove the overall movement of the face. Finally, precise visual speech parameters are computed from the movement of these facial feature points; these parameters have already been used in several related applications.
A Novel Video Caption Detection Approach Using Multi-Frame Integration
Wang Rongrong, Jin Wanjun, and Wu Lide
2005, 42(7):  1191-1197. 
Captions in videos often play an important role in video information indexing and retrieval. In this paper, a novel video caption detection approach is presented. The approach first applies a new multiple frame integration (MFI) method to reduce the complexity of the image background: a time-based minimum (or maximum) pixel value search is employed, and a Sobel edge map is used to determine the mode of the search. Then block-based text detection is performed, i.e., a small window scans the image and each block is classified as text or non-text using Sobel edges as features; a two-level pyramid is applied to detect various text sizes. Finally, a new iterative text line decomposition method extracts accurate text bounding boxes from the candidate text areas. Experimental results show that the proposed approach achieves high precision and recall.
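The time-based minimum/maximum search at the heart of MFI can be sketched as follows: because a caption stays fixed across consecutive frames while the background moves, a per-pixel min (for dark text) or max (for bright text) flattens the background and preserves the caption. Plain lists stand in for image arrays; this is a simplified reading, not the paper's implementation.

```python
# Sketch of multiple-frame integration: per-pixel min or max over frames.
def integrate_frames(frames, mode='min'):
    """frames: list of same-sized 2-D pixel grids (lists of lists of ints)."""
    pick = min if mode == 'min' else max
    h, w = len(frames[0]), len(frames[0][0])
    return [[pick(f[y][x] for f in frames) for x in range(w)] for y in range(h)]
```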
A New Low Bit-Rate Image Coding Scheme
Wang Xiangyang and Yang Hongying
2005, 42(7):  1198-1203. 
In this paper, an embedded image coder is presented, which provides very effective, progressive image transmission while achieving visual performance superior to SPIHT. The visual gains are achieved by exploiting human visual masking characteristics to weight wavelet coefficients according to their perceptual importance, causing a reordering of the coefficients. The reduced bit rates are achieved by merging the first and second passes of the SPIHT algorithm. Experimental results show that the new still image compression scheme provides higher perceptual quality and higher PSNR than SPIHT and related coders, especially at low bit rates.
Research and Analysis of Probability Logic Based on Universal Logics
Wang Wansen and He Huacan
2005, 42(7):  1204-1209. 
Probability logic is an important foundation for uncertainty reasoning, yet much remains to be done to perfect it. Universal logics is a new flexible logic system established in the study of various nondeterministic problems; it is an abstraction for building concrete logic systems, and theoretically, probability logic is a special case of universal logics. In this paper, several classical models are analyzed, problems in these models are pointed out, and new methods based on universal logics are presented to solve them.
EM-GMPF: An EM-Based Gaussian Mixture Particle Filter Algorithm
Li Jing, Chen Zhaoqian, and Chen Shifu
2005, 42(7):  1210-1216. 
The particle filter is a recent real-time inference algorithm based on Bayesian inference and the Monte Carlo method. Because of its unique characteristics, such as being flexible, easy to implement, parallelizable, and efficient for nonlinear problems, the particle filter has become a promising hot topic in applied statistics, signal processing, and artificial intelligence, and it has been applied to many tasks such as object tracking. The biggest problem affecting estimation performance in a particle filter is sample depletion caused by the resampling step. This paper addresses the problem through the representation of the particles, and an EM-based Gaussian mixture particle filter is presented. Computer simulations and visual tracking experiments demonstrate that the proposed method can reduce the required number of samples and improve the estimation performance of the particle filter.
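A 1-D sketch of the central idea, under stated assumptions (two mixture components, a fixed number of EM steps, min/max initialization — none of these details come from the paper): instead of resampling particles directly, which depletes diversity, fit a small Gaussian mixture to the weighted particles with EM and draw the new particle set from the fitted mixture.

```python
import math
import random

# Hypothetical EM-GMPF-style step: weighted EM fit of a 2-component 1-D
# Gaussian mixture to the particle set, then resampling from the mixture.
def fit_gmm(xs, ws, iters=20):
    """Returns (pis, mus, sigmas) for a 2-component mixture."""
    k = 2
    mus = [min(xs), max(xs)]          # spread initial means apart
    sigmas = [1.0] * k
    pis = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibilities, scaled by the particle weights
        resp = []
        for x, w in zip(xs, ws):
            dens = [pis[j] * math.exp(-0.5 * ((x - mus[j]) / sigmas[j]) ** 2) / sigmas[j]
                    for j in range(k)]
            z = sum(dens)
            resp.append([w * d / z for d in dens])
        # M-step: re-estimate weights, means, and (clamped) std deviations
        for j in range(k):
            nj = sum(r[j] for r in resp)
            mus[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, xs)) / nj
            sigmas[j] = max(math.sqrt(var), 1e-3)
            pis[j] = nj / sum(ws)
    return pis, mus, sigmas

def resample_from_gmm(pis, mus, sigmas, n, seed=0):
    rng = random.Random(seed)
    comps = rng.choices(range(len(pis)), weights=pis, k=n)
    return [rng.gauss(mus[j], sigmas[j]) for j in comps]
```

Sampling from the continuous mixture rather than the discrete particle set is what restores diversity after resampling.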
An Approach for Reduction of Continuous-Valued Attributes
Shang Lin, Wan Qiong, Yao Wangshu, Wang Jingen, and Chen Shifu
2005, 42(7):  1217-1224. 
Attribute reduction is the main application of rough set theory. Present methods for reduction are mainly applicable to information systems with discrete values. For continuous-valued attributes, the common way is first to discretize the value ranges into intervals and then transform the continuous values into discrete ones; in such discretization some information is lost, which may affect the reduction. In this paper, a new approach for reduction of continuous-valued attributes (ReCA) is presented, which integrates discretization and reduction using information entropy-based uncertainty measures and evolutionary computation. Experimental results show that ReCA is effective for the reduction of continuous-valued attributes, obtaining fewer attributes and good precision compared with rough set methods and the C4.5 decision tree.
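The entropy-based uncertainty measure that such discretization builds on can be sketched as a binary cut search: pick the split point of a continuous attribute that minimizes the weighted class entropy of the two resulting intervals. This is the classic criterion, not ReCA itself, and the helper names are illustrative.

```python
import math

# Entropy-minimizing binary cut for one continuous attribute (classic criterion).
def entropy(labels):
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * math.log2(p) for p in probs)

def best_cut(values, labels):
    """Return the midpoint cut that minimizes weighted class entropy."""
    pairs = sorted(zip(values, labels))
    best = (float('inf'), None)
    for i in range(1, len(pairs)):
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2.0
        left = [l for v, l in pairs[:i]]
        right = [l for v, l in pairs[i:]]
        cost = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if cost < best[0]:
            best = (cost, cut)
    return best[1]
```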
VLSI Design for Full-Search Block-Matching Full-Pel Motion Estimation Processor
He Weifeng, Mao Zhigang, Lü Zhiqiang, and Yin Haifeng
2005, 42(7):  1225-1230. 
An improved architecture for motion estimation using the full-search block-matching algorithm is proposed in this paper. To reduce the utilization of the global bus to the external memory and to improve the data reuse efficiency of search frame pixels, a multi-port matching scheme and a double clock strategy are adopted. Compared with previous FBMA architectures, the new architecture achieves 74.9% processor utilization and improves the reuse efficiency of search area pixel data. The motion estimation processor is implemented in TSMC 0.25μm 1-poly 5-metal CMOS technology; it occupies a silicon area of 3.37mm×3.37mm and operates at 110MHz. Experimental results show that it can estimate full-pixel motion vectors of MPEG-4 AS profile sequences in ITU-R 601 format (720×480@30Hz/NTSC or 720×576@25Hz/PAL) in real time at around 89.4MHz.
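The full-search block-matching algorithm (FBMA) that the processor accelerates is itself simple to state in software: test every candidate displacement in the search window and keep the motion vector with the lowest sum of absolute differences (SAD). The sketch below shows that reference behavior; plain lists stand in for frames, and it says nothing about the paper's hardware scheme.

```python
# Reference software behavior of full-search block matching with SAD.
def sad(cur, ref, bx, by, dx, dy, bs):
    return sum(abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
               for y in range(bs) for x in range(bs))

def full_search(cur, ref, bx, by, bs, radius):
    """Exhaustively search displacements in [-radius, radius]^2; return (dx, dy)."""
    h, w = len(ref), len(ref[0])
    best = (float('inf'), (0, 0))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if 0 <= by + dy and by + dy + bs <= h and 0 <= bx + dx and bx + dx + bs <= w:
                best = min(best, (sad(cur, ref, bx, by, dx, dy, bs), (dx, dy)))
    return best[1]
```

The exhaustive inner loops are exactly the workload whose pixel reuse the proposed multi-port architecture is designed to exploit.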
An Improved TFQMR Algorithm for Large Linear Systems Suited to Parallel Computing
Liu Jie, Chi Lihua, Hu Qingfeng, and Li Xiaomei
2005, 42(7):  1235-1240. 
The transpose-free QMR (TFQMR) algorithm is a Krylov subspace algorithm for the fast solution of linear systems with very large, very sparse coefficient matrices. In this paper, by changing the computation sequence of the TFQMR algorithm, an improved transpose-free QMR (ITFQMR) algorithm is proposed. The numerical stability of ITFQMR is the same as that of TFQMR, but the synchronization overhead that forms the bottleneck of parallel performance is effectively reduced by a factor of two. All inner products of a single iteration step are independent, and the communication time required for inner products can be overlapped efficiently with the computation time of vector updates. Theoretical and experimental analysis shows that the ITFQMR algorithm becomes faster than TFQMR as the number of processors increases; experiments on a 64-processor cluster indicate that ITFQMR is approximately 20% faster than TFQMR.
A Surface-Based DNA Algorithm for the Perfect Matching Problem
Chen Zhiping, Li Xiaolong, Wang Lei, Lin Yaping, and Cai Lijun
2005, 42(7):  1241-1246. 
Using fluorescence labeling, a new surface-based DNA algorithm for the perfect matching problem is presented in this paper. By fixing the DNA molecules of the solution space on a solid carrier, all solutions of the perfect matching problem can be acquired through biochemical operations. Compared with other surface-based DNA algorithms for the maximal matching problem, this algorithm can precisely identify the edges present in any perfect matching without observation, and the edge order has no influence on the solution generation process. The new algorithm therefore achieves better performance.
Review of Software Architecture Analysis and Evaluation Methods
Liu Xia, Li Mingshu, Wang Qing, and Zhou Jinhui
2005, 42(7):  1247-1254. 
Software architecture (SA) is emerging as a primary research area in software engineering and one of the key technologies for developing large-scale software systems and product line systems. The purpose of SA analysis and evaluation is to identify potential risks and help make proper architectural decisions. Based on the concept of SA, basic definitions are classified and summarized from the different views of software architecture description. Recent representative SA analysis and evaluation methods and their supporting tools are introduced and reviewed. Additionally, some open issues in SA analysis and evaluation are discussed and their causes explained. Finally, promising trends in SA analysis and evaluation are identified. The purpose of this work is to compare the advantages and disadvantages of the representative methods and tools, and thereby to provide support for choosing suitable methods and tools for architecture evaluation and assessment.
An Integrated Spatio-Temporal Forecasting Approach Based on Data Fusion and Method Fusion
Xu Wei, Huang Houkuan, and Wang Yingjie
2005, 42(7):  1255-1260. 
Spatio-temporal data mining is an important research topic in data mining, and spatio-temporal forecasting is its most widely used task. In order to overcome the limitations of current spatio-temporal forecasting methods, this paper proposes a spatio-temporal forecasting approach based on data fusion and method fusion. The approach first forecasts the time sequence of the target object itself using statistical principles and computes the influence of neighboring objects using a neural network, then forecasts the mixed data sequence using a spatio-temporal auto-regressive model, and finally integrates the individual time sequence forecast, spatial forecast, and spatio-temporal forecast through linear regression to deliver the final result. The approach has been successfully used in forecasting railway passenger flow, overcoming limitations of traditional railway passenger flow forecasting methods; the experimental results show its effectiveness.
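The final fusion step, combining component forecasts by linear regression, can be sketched for two forecasters with the closed-form normal equations of least squares (no intercept). The two-forecast restriction and all names are simplifying assumptions; the paper fuses three component forecasts.

```python
# Least-squares fusion of two forecast series against observed targets:
# solve the 2x2 normal equations for weights (w1, w2) with w1*f1 + w2*f2 ≈ target.
def fuse_two(f1, f2, target):
    a = sum(x * x for x in f1)
    b = sum(x * y for x, y in zip(f1, f2))
    d = sum(y * y for y in f2)
    e = sum(x * t for x, t in zip(f1, target))
    g = sum(y * t for y, t in zip(f2, target))
    det = a * d - b * b                      # Gram determinant of the two series
    return ((e * d - b * g) / det, (a * g - b * e) / det)
```

Once fitted on history, the weights are applied to new component forecasts to produce the fused prediction.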
XML Indexing Technology Based on IRST
Lei Xiangxin, Hu Yunfa, Yang Zhiying, Liu Yong, and Zhang Kai
2005, 42(7):  1261-1271. 
A new numbering scheme for labeling rooted trees, based on the leaf order interval numbering scheme (LOINS), is proposed. The nodes of a rooted tree can be encoded in one traversal of the tree, and the ancestor-descendant relationship between nodes can be determined in constant time based on LOINS. Furthermore, IsBaRTI-I, a novel index for the rooted tree structure data model, is proposed, which takes advantage of IRST properties such as indexability and compressibility; IsBaRTI-II, a space-optimized version of IsBaRTI-I, is also introduced. IsBaRTI-I and -II index the ancestor-descendant relationships among nodes and the LOINS number of each node by the node's name (label) and the count of its appearances in the rooted tree; in this way, the indexing structure and the numbering scheme become a unified whole. Theoretical analysis and experimental results show that IsBaRTI-I and -II can be built in less time and with less space, and that the node series and paths matching XPath expressions can be obtained more quickly than with previous XML indexes.
Design of a Servent Based Operating System
Li Hong, Chen Xianglan, Wu Mingqiao, Gong Yuchang, and Zhao Zhenxi
2005, 42(7):  1272-1276. 
From the point of view of software engineering, the microkernel concept is superior to the monolithic kernel concept; however, it is also widely believed that microkernel-based systems are inherently inefficient. The servent model is a novel structural model for operating systems. It takes the servent and the exe-flow as its storage and execution abstractions, respectively, and makes them independent of each other. On the basis of this model, an operating system may achieve higher efficiency than a microkernel system and better extensibility than a monolithic kernel system. A prototype system, MiniCore, was designed based on this model and developed for the Intel i386 platform; MiniCore has also been ported to a router platform. Finally, the performance of the operating system and the model is presented.
Test Response Compactor for Scan-Based Circuit
Han Yinhe, Li Xiaowei, and Li Huawei
2005, 42(7):  1277-1282. 
A novel sequential compactor called the Awl-compactor is presented to compact the response data during testing. The proposed Awl-compactor can be embedded in the circuit as a kind of on-chip test resource, and owing to its single output, it obtains the best compaction ratio. Based on an analysis of the distribution of error bits during scan testing, two design rules are proposed to avoid the cancellation of 2, 3, and any odd number of errors, covering not only static error bit cancellation but also dynamic error bit cancellation. These two rules can also be used to handle the masking of unknown bits in the response data: one error bit accompanied by one unknown bit can still be detected if the two proposed rules are satisfied. In order to generate the register transfer level or gate netlist design of the compactor automatically within a conventional design flow, a synthesis algorithm based on random selection is presented. Experiments on area overhead and error bit cancellation show that the proposed compactor obtains the maximum compaction ratio and superior performance with only a small area overhead penalty compared with previous techniques.