ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 October 2005, Volume 42 Issue 10
Paper
Secure Multi-Party Computation of Set-Inclusion and Graph-Inclusion
Li Shundong, Si Tiange, and Dai Yiqi
2005, 42(10):  1647-1653. 
Secure multi-party computation is a research focus in the international cryptographic community. This paper studies the secure multi-party computation of the set-inclusion and graph-inclusion problems, and proposes a secure multi-party computation solution to the set-inclusion problem. Based on this solution, combined with a Monte Carlo approach and Cantor encoding, an approximate secure multi-party computation solution to the graph-inclusion problem is further proposed. The security of these solutions is proved with the simulation paradigm. Compared with known solutions, the new solutions can solve more complicated problems with lower communication complexity and wider applicability; for problems that known solutions can already handle, the new solutions have lower computational complexity.
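As an illustrative aside (not the authors' protocol): the classical Cantor pairing function mentioned above can encode each edge of a graph as a single integer, so that graph inclusion reduces to set inclusion. A minimal sketch, with hypothetical function names:

```python
def cantor_pair(u: int, v: int) -> int:
    """Classical Cantor pairing: a bijection from ordered pairs to integers."""
    s = u + v
    return s * (s + 1) // 2 + v

def graph_to_set(edges):
    # Encode each directed edge as one integer; graph inclusion then
    # reduces to set inclusion over these codes.
    return {cantor_pair(u, v) for (u, v) in edges}

g1 = {(0, 1), (1, 2)}
g2 = {(0, 1), (1, 2), (2, 0)}
print(graph_to_set(g1) <= graph_to_set(g2))  # True: g1 is included in g2
```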
Research on a Fuzzy Logic-Based Subjective Trust Management Model
Tang Wen, Hu Jianbin, and Chen Zhong
2005, 42(10):  1654-1659. 
Trust management models are fundamental to information security in open networks. In this paper, the fuzzy nature of subjective trust is considered, and the concepts of linguistic variables and fuzzy logic are introduced into subjective trust management. A formal trust metric is given; fuzzy IF-THEN rules are applied to map the knowledge and experience that people use in everyday trust reasoning into a formal model of trust management; the reasoning mechanisms of trust vectors are given; and a subjective trust management model is provided. The proposed formal model offers a new and valuable way of studying subjective trust management in open networks.
Secret Sharing-Based Rerouting in Rerouting-Based Anonymous Communication Systems
Sui Hongfei, Chen Jian'er, Chen Songqiao, and Zhu Nafei
2005, 42(10):  1660-1666. 
Rerouting is the main mechanism that rerouting-based anonymous communication systems use to protect the anonymity of communication participants. Next-hop rerouting, an important type of rerouting adopted by many such systems, is easily compromised by the predecessor attack and suffers from high communication delay. In this paper, a new next-hop rerouting mechanism, secret sharing-based rerouting (SSR), is proposed based on a threshold scheme. With this mechanism, end-to-end encryption between the sender and the last intermediary on the rerouting path can be achieved, making it considerably harder for attackers to compromise anonymity. Theoretical analysis demonstrates that the attack complexity can be kept at the same level as source rerouting. Moreover, the sender can control the path length effectively, thereby limiting the communication delay.
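For readers unfamiliar with threshold schemes, here is a minimal sketch of Shamir's (k, n) secret sharing, the standard construction behind such mechanisms; the paper's SSR protocol builds on this idea but is not reproduced here:

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for demo secrets

def make_shares(secret: int, k: int, n: int):
    """Shamir (k, n) threshold scheme: any k of n shares recover the secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(123456789, k=3, n=5)
print(recover(shares[:3]))  # any 3 of the 5 shares suffice -> 123456789
```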
Stationarity and Balance—Strategies and Methods of Elliptic Curve Cryptosystem Against Side Channel Attacks
Liu Duo, Dai Yiqi, and Wang Daoshun
2005, 42(10):  1667-1672. 
Side channel attacks are a recent class of attacks that use observations such as timing or power consumption measurements to obtain information that is supposed to be kept secret, and they have proved very powerful in practice. In elliptic curve cryptosystems, a particular target of side channel attacks is the point multiplication algorithm. To speed up elliptic curve scalar multiplication while securing it against side channel attacks, various methods have been proposed that use specially chosen elliptic curves, special representations of points on the curve, and other techniques. This paper surveys algorithms and implementations that defend elliptic curve cryptosystems against side channel attacks, organized around the two main ideas of stationarity and balance, and points out the advantages and disadvantages of each method. Finally, several main directions for future research on this topic are identified.
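One well-known balanced point-multiplication method of the kind surveyed here is the Montgomery ladder, which performs the same add-and-double pattern for every key bit. A sketch over a toy additive group (real implementations use elliptic curve point arithmetic):

```python
def montgomery_ladder(k: int, P, add, double):
    """Scalar multiplication with one add and one double per bit, so the
    operation sequence is independent of the bits of k (balance)."""
    R0, R1 = None, P          # None represents the group identity
    for bit in bin(k)[2:]:
        if bit == '0':
            R1 = add(R0, R1)
            R0 = double(R0)
        else:
            R0 = add(R0, R1)
            R1 = double(R1)
    return R0

# Toy group: integers under addition, exposing only the ladder structure.
add = lambda a, b: (a or 0) + (b or 0)
double = lambda a: 2 * (a or 0)
print(montgomery_ladder(13, 5, add, double))  # 65 == 13 * 5
```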
A Dynamic Defense Against Denial of Service in Two-Party Security Protocols
Wei Jianfan, Duan Yunsuo, Tang Liyong, and Chen Zhong
2005, 42(10):  1673-1678. 
Denial of service (DoS) is a kind of active attack that aims to prevent authorized users from accessing services, and DoS vulnerabilities of varying degrees exist in many security protocols. A new countermeasure based on session identifiers and proof of work is presented, and then analyzed in the formal framework originally proposed by Meadows. In addition, some useful principles are provided for designing DoS-resistant network protocols. Using this countermeasure, two-party security protocols can be designed or modified to resist DoS attacks in a dynamic way without losing their security properties.
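A hashcash-style client puzzle bound to a session identifier illustrates the proof-of-work ingredient; this is a generic sketch, not the paper's exact countermeasure:

```python
import hashlib, itertools, os

def solve(session_id: bytes, nonce: bytes, difficulty: int) -> int:
    """Client: find a counter whose hash has `difficulty` leading zero bits."""
    for counter in itertools.count():
        h = hashlib.sha256(session_id + nonce + counter.to_bytes(8, 'big')).digest()
        if int.from_bytes(h, 'big') >> (256 - difficulty) == 0:
            return counter

def verify(session_id: bytes, nonce: bytes, difficulty: int, counter: int) -> bool:
    """Server: a single hash checks the work, so verification stays cheap."""
    h = hashlib.sha256(session_id + nonce + counter.to_bytes(8, 'big')).digest()
    return int.from_bytes(h, 'big') >> (256 - difficulty) == 0

sid, nonce = os.urandom(16), os.urandom(8)   # nonce issued per session
c = solve(sid, nonce, difficulty=16)
print(verify(sid, nonce, 16, c))             # True
```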
Intrusion-Detection Alerts Processing Based on Fuzzy Comprehensive Evaluation
Mu Chengpo, Huang Houkuan, Tian Shengfeng, Lin Youfang, and Qin Yuanhui
2005, 42(10):  1679-1685. 
An algorithm based on fuzzy comprehensive evaluation for correlating the alerts produced by intrusion detection systems is presented. The paper also gives an approach to learning a confidence metric for each type of alert, which can be used to filter alerts further. False positives and duplicate alerts can be reduced significantly by using the correlation algorithm and the confidence learning method together, and the workload of network administrators is reduced accordingly. In addition, the correlated alerts help capture the logical steps or strategies behind attacks and choose appropriate actions to stop ongoing attacks. The approach can potentially be used to integrate different kinds of security tools in order to realize cooperative defense of network systems.
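Fuzzy comprehensive evaluation composes a factor weight vector W with a membership matrix R to obtain a grade vector B. A minimal sketch with the common weighted-average operator (the paper's factors and composition operator may differ):

```python
import numpy as np

# Membership matrix R: rows are evaluation factors of an alert (e.g. signature
# severity, source reliability, frequency), columns are grades (low/mid/high).
R = np.array([[0.1, 0.3, 0.6],
              [0.2, 0.5, 0.3],
              [0.4, 0.4, 0.2]])
W = np.array([0.5, 0.3, 0.2])   # factor weights, summing to 1

B = W @ R                        # weighted-average composition
print(B, ["low", "mid", "high"][B.argmax()])  # pick the dominant grade
```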
An Optimistic Payment Protocol Based on Mobile Agents
Liu Yi, Pang Liaojun, and Wang Yumin
2005, 42(10):  1686-1691. 
Mobile agent systems are a programming paradigm that allows flexible structuring of distributed computation over the Internet. They are expected to play an important role in the future information society, and especially in e-commerce applications. Despite their benefits, their adoption is largely hampered by the new security issues they raise. In this paper, using a verifiable secret sharing scheme and the theory of cross validation, an optimistic payment protocol based on mobile agents is proposed. The protocol protects the confidentiality of sensitive payment information carried by mobile agents from being spied on by malicious hosts, and a mobile agent can verify that the product it is about to receive is the one it is paying for, without the product being exposed. Moreover, the trusted third party stays offline unless a party misbehaves or aborts prematurely.
The Combined Use of FAPKC Without Compromising the Security of the Cryptosystem
Han Xiaoxi and Yao Gang
2005, 42(10):  1692-1697. 
It is a maxim of sound computer security practice that a cryptographic key should have only a single use. However, S. Haber and B. Pinkas showed that in many cases the simultaneous use of related keys in two cryptosystems, e.g., a public key encryption system and a public key signature system, does not compromise their security. The finite automata public key cryptosystem (FAPKC) can be used both as an encryption algorithm and as a signature algorithm, making it a combined cryptosystem. In this paper, it is proved that such combined use of FAPKC does not compromise the security of the cryptosystem.
A Queue Management Algorithm Fit for Network Processors
Zheng Bo, Lin Chuang, and Li Yin
2005, 42(10):  1698-1705. 
Following the model of proportional differentiated services, a queue management algorithm fit for network processors is presented. The algorithm has two parts: RR-PLR (round-robin based proportional loss rate), which controls the loss ratio as packets enter the queue, and WRR-PAD (WRR based proportional average delay), which controls the delay as packets leave the queue. To avoid division and sorting operations, the algorithm uses a round-robin mechanism; its complexity is O(1), which suits the architecture of network processors well. Simulation in NS2 shows that the algorithm achieves proportional differentiated service in both average packet loss ratio and average queuing delay. A prototype has also been implemented on the Intel IXP2400 platform. Test results show that the whole system can reach 1.125Gbps (with 64-byte packets, i.e., 2.25Mpps) while achieving proportional differentiated service.
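A round-robin schedule can approximate proportional loss rates without division or sorting, which is the spirit of RR-PLR; the following is a simplified sketch of that idea, not the paper's algorithm:

```python
from collections import deque

class RRPLRSketch:
    """Round-robin drop selector approximating proportional loss rates
    with O(1) work per decision and no division or sorting."""
    def __init__(self, weights):
        # weights[i]: relative target loss rate of class i (small integers)
        self.schedule = deque(i for i, w in enumerate(weights) for _ in range(w))

    def class_to_drop(self):
        # On buffer overflow, drop from the next class in the precomputed
        # round-robin schedule; over time losses follow the weight ratios.
        c = self.schedule[0]
        self.schedule.rotate(-1)
        return c

sel = RRPLRSketch([1, 2, 4])   # class 2 should lose 4x as often as class 0
print([sel.class_to_drop() for _ in range(7)])
```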
On Retrieval System Evaluation of Search Engines
Peng Bo and Yan Hongfei
2005, 42(10):  1706-1711. 
Evaluating Web search challenges the traditional evaluation methods of information retrieval systems. In this paper, a query set covering different categories of user information needs is constructed by analyzing the query log of the Tianwang search engine. In evaluation experiments on three popular search engines, the differences between their indexed document sets are reduced by filtering the query results through the InfoMall Web archive. Experiments show that: ① significant differences exist among voluntary assessors, but the evaluation results remain stable; ② continuous relevance scores and the corresponding measures discriminate better than binary ones; and ③ a query set of 50 queries is sufficient for the evaluation measure DCG in Web search evaluation.
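DCG, the measure used in finding ③, discounts graded relevance by rank position, which is why continuous scores discriminate better than binary ones. A minimal sketch:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain over graded (continuous) relevance scores,
    with the standard log2 position discount."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

# Graded judgments can separate rankings that binary judgments would tie.
print(dcg([3, 2, 3, 0, 1]))   # engine A
print(dcg([3, 3, 2, 1, 0]))   # engine B ranks relevant pages higher
```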
An Adaptive Initiative Predictive Handoff Mechanism for Mobile IPv6
Ou Yingfeng, Li Renfa, and Xia Shunhui
2005, 42(10):  1712-1717. 
Mobile IPv6 handoff still suffers from large handoff latency and packet loss. Handoff latency consists of movement latency and registration latency, with movement latency as the dominant part. In this paper, an adaptive initiative predictive neighbor-unicast handoff scheme is presented. An adaptive initiative prediction algorithm is adopted: predictions are made intelligently according to network conditions and the movement of the mobile host, and the algorithm adjusts itself according to feedback. The special case of the ping-pong effect is also fully considered. Moreover, combined with a hierarchical mobility management scheme, the registration latency is reduced, and by adopting predictive neighbor unicast, the network load is reduced. The result is a fast and smooth handoff.
Face Image Super-Resolution Reconstruction Based on Recognition and Projection onto Convex Sets
Huang Hua, Fan Xin, Qi Chun, and Zhu Shihua
2005, 42(10):  1718-1725. 
Face image super-resolution reconstruction (SRR) can be widely used in forensic analysis and video surveillance. Following the recognition-based idea, the statistical characteristics of face images are investigated and incorporated into SRR. Based on the set-theoretic formulation, a projection onto convex sets (POCS) algorithm is applied to solve the face image reconstruction problem. Compared with traditional POCS-based SRR methods, the proposed approach imposes extra constraint sets on the solution. Experimental results on frontal face images show that the proposed approach performs better in both noise suppression and reconstruction quality, and has the advantage of computational simplicity.
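A generic POCS loop alternates projections onto constraint sets, here data consistency and a valid intensity range; the paper's face-specific constraint sets are omitted in this simplified sketch:

```python
import numpy as np

def pocs_sr(lr, scale=2, iters=25):
    """Alternate projections: the simulated low-res image must match the
    observation (data consistency), and pixels must stay in a valid range."""
    h, w = lr.shape
    hr = np.kron(lr, np.ones((scale, scale)))             # initial upsampling
    for _ in range(iters):
        sim = hr.reshape(h, scale, w, scale).mean(axis=(1, 3))
        hr += np.kron(lr - sim, np.ones((scale, scale)))  # back-project residual
        hr = np.clip(hr, 0.0, 255.0)                      # amplitude constraint set
    return hr

lr = np.random.default_rng(1).uniform(0, 255, (8, 8))
print(pocs_sr(lr).shape)   # (16, 16)
```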
Moving Object Tracking Based on Location and Confidence of Pixels
Shi Hua, Li Cuihua, Wei Fengmei, and Wang Huawei
2005, 42(10):  1726-1732. 
Moving object tracking is a critical issue in image sequence processing. In this paper, a moving object tracking algorithm based on the location and confidence of pixels is proposed. First, moving objects are detected by combining a median background model in the temporal domain with minimum cross-entropy segmentation in the spatial domain. The rectangular regions of the objects are then obtained, and an HSV color distribution model is used to measure the similarity between target rectangles and hypothesized rectangles. In this process, a weighting function based on the location and confidence of pixels is introduced to weight the pixel values inside the tracked rectangle. Experimental results show that the algorithm is computationally efficient and robust to scale changes, partial occlusion, and interactions of non-rigid objects, especially similar objects.
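The similarity step can be sketched as a location-weighted hue histogram compared with the Bhattacharyya coefficient; the weighting kernel below is illustrative, not the paper's exact location-and-confidence function:

```python
import numpy as np

def weighted_hue_hist(hue, weights, bins=16):
    """Hue histogram where each pixel contributes its weight, not 1."""
    h, _ = np.histogram(hue, bins=bins, range=(0.0, 1.0), weights=weights)
    return h / (h.sum() + 1e-12)

def bhattacharyya(p, q):
    """Similarity between target and candidate color distributions."""
    return float(np.sum(np.sqrt(p * q)))

rng = np.random.default_rng(0)
target, candidate = rng.random(400), rng.random(400)   # hue channels
d = np.abs(np.linspace(-1, 1, 400))                    # distance from center
w = 1.0 - d**2          # center pixels weighted more, border pixels less
print(bhattacharyya(weighted_hue_hist(target, w),
                    weighted_hue_hist(candidate, w)))
```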
Multiple Reference Minutiae Based Fingerprint Matching
Zhu En, Yin Jianping, and Zhang Guomin
2005, 42(10):  1733-1739. 
This paper proposes a minutia matching method based on global alignments of multiple pairs of reference minutiae. These reference minutiae are distributed across various fingerprint regions. During matching, the pairs of reference minutiae are globally aligned, so that region pairs far away from the original reference minutiae are aligned more satisfactorily. Experiments show that the method improves system identification performance.
Tracking Cardiac MRI Tag by Markov Random Field Theory
Tang Min, Wang Yuanquan, Pheng Ann Heng, and Xia Deshen
2005, 42(10):  1740-1745. 
Tag tracking is a preliminary step in reconstructing heart motion. This paper puts forward a new statistical tag tracking method that uses a grid model as its basic structure. First, the method models the grid nodes' positions as a Markov random field (MRF) and uses the EM algorithm to classify the nodes into two classes according to whether a node lies in the ventricles. Then, it designs different prior distributions and likelihood functions for the two classes based on their roles in the tracking process, and uses the iterated conditional modes (ICM) algorithm to compute the maximum a posteriori (MAP) estimate. The method is tested on several cardiac systole MRI sequences. The results show that the method classifies the grid nodes correctly and can thus accurately track the SPAMM tag lines, and because the grid model's Markov character is taken into account, the grid keeps its topological shape during tracking.
Different Complex Wavelet Transforms for Texture Retrieval and Similarity Measure
Shang Zhaowei, Zhang Mingxin, Zhao Ping, and Shen Junyi
2005, 42(10):  1746-1751. 
Complex wavelet transforms overcome drawbacks of the discrete wavelet transform such as shift sensitivity, poor directionality, and lack of phase information. In this paper, the performance of first-order and second-order (co-occurrence) statistical signatures of different complex wavelet transforms (CWT), together with wavelet energy, is studied and applied to texture feature extraction. Theoretical analysis and contrast experiments on texture retrieval show that the CWT outperforms the pyramid discrete wavelet transform (PDWT) for texture feature extraction. The best performance is achieved by combining the first-order signatures with the second-order signatures, which raises retrieval performance by 8%.
A Cluster Algorithm of Automatic Key Frame Extraction Based on Adaptive Threshold
Wang Fangshi, Xu De, and Wu Weixin
2005, 42(10):  1752-1757. 
Extracting key frames with an unsupervised clustering algorithm is a common method, but such algorithms are sensitive to the initial number of classes and the initial classification, and it is problematic to predefine an absolute number of key frames without knowing the video content. A two-pass clustering approach is presented. In the first pass, the similarity distances between consecutive frames in a shot are clustered into two classes, so that the thresholds needed in the second pass can be determined adaptively. In the second pass, all frames in the shot are clustered using the dynamic ISODATA clustering algorithm, and the frame nearest to the center of each class is automatically extracted as a key frame of the shot. The approach is simple and effective, with no need to predefine any threshold. Experimental results on many videos with different characteristics demonstrate the good performance of the proposed algorithm.
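The first-pass idea, deriving an adaptive threshold by splitting consecutive-frame distances into two classes, can be sketched with one-dimensional two-means clustering (the paper's clustering details may differ):

```python
import numpy as np

def adaptive_threshold(dists, iters=50):
    """Cluster consecutive-frame distances into two classes (small = similar,
    large = content change); the midpoint of the two centers serves as the
    adaptive threshold for the second clustering pass."""
    lo, hi = float(np.min(dists)), float(np.max(dists))
    for _ in range(iters):
        labels = np.abs(dists - lo) <= np.abs(dists - hi)  # True -> near lo
        if labels.all() or not labels.any():
            break                                          # degenerate split
        new_lo, new_hi = dists[labels].mean(), dists[~labels].mean()
        if (new_lo, new_hi) == (lo, hi):
            break
        lo, hi = new_lo, new_hi
    return (lo + hi) / 2

dists = np.array([0.02, 0.03, 0.01, 0.4, 0.05, 0.35, 0.02])
print(adaptive_threshold(dists))
```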
A New Zero-Block Detection Method for H.264/AVC
Cheng Yun, Dai Kui, Wang Zhiying, Shen Li, and Guo Jianjun
2005, 42(10):  1758-1762. 
Zero-block detection is a simple and effective speed-up technique in video coding schemes based on motion estimation and prediction. Since H.264/AVC is such a scheme, its encoding time can be decreased by zero-block detection. According to the new characteristics of H.264/AVC, the threshold for zero-block detection is derived and a new zero-block detection method is proposed in this paper. Experimental results show that the new method decreases encoding time by 20% to 47% without a significant loss of coding efficiency at low to medium bit rates.
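The place of such a test in an encoder can be sketched as a SAD check on the motion-compensated residual; the threshold value itself is derived in the paper and is only a placeholder here:

```python
import numpy as np

def is_zero_block(residual_4x4, threshold):
    """Skip transform, quantization, and entropy coding when the SAD of the
    motion-compensated residual falls below the zero-block threshold."""
    return np.abs(residual_4x4).sum() < threshold

rng = np.random.default_rng(0)
cur = rng.integers(0, 255, (4, 4))
ref = cur + rng.integers(-1, 2, (4, 4))          # near-perfect prediction
print(is_zero_block(cur - ref, threshold=24))    # likely True -> code as all-zero
```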
An Efficient Matching Algorithm for RDF Graph Patterns
Wang Jinling, Jin Beihong, and Li Jing
2005, 42(10):  1763-1770. 
The semantic Web is increasingly accepted as the next generation of the WWW, and its foundation is the resource description framework (RDF). As more and more information is represented in RDF, the efficient dissemination and filtering of RDF information becomes an important problem. In information dissemination systems for the semantic Web, input RDF data must be matched against a database of user profiles, which can be represented as RDF graph patterns. Based on the characteristics of RDF and several restrictions on the RDF graph, a novel matching algorithm for RDF graph patterns is proposed. The RDF graph and the RDF patterns are each traversed from a special node to form BFS trees, and the matching state of two BFS trees is represented as an AND-OR tree to avoid backtracking. RDF patterns are indexed according to the concept model to further improve efficiency. Experimental results show that the algorithm is much more efficient than conventional graph matching algorithms.
Simultaneous Sliding Window Join Approach over Multiple Data Streams
Qian Jiangbo, Xu Hongbing, Wang Yongli, Liu Xuejun, and Dong Yisheng
2005, 42(10):  1771-1778. 
Recently there has been growing interest in sliding window joins for scenarios in which streams arrive at very high rates and a data stream management system is registered with many simultaneous queries. To process these continuous queries, a novel window join approach named M3Join and its implementation architecture Roujoin are proposed. Roujoin contains a join routing table and several join areas, and is initialized or updated according to the registered queries. Each tuple in the data streams is extended with a route tag. When an original tuple arrives, it is inserted into the corresponding buffer in one of the join areas; it then consults the join routing table and is switched to the right join area to perform join operations, or is returned to the end users. Generated join tuples, whose route tags have been updated, repeat the search-and-join procedure until no further join results are produced. Other original tuples are processed in the same way. The approach needs only one scan over the data streams, since different join queries share intermediate results. Experimental results indicate that the approach is feasible and efficient.
A Survey of Transaction Processing Technology
Ren Yi, Wu Quanyuan, Jia Yan, Han Weihong, and Guan Jianbo
2005, 42(10):  1779-1784. 
Transaction processing is important for ensuring the consistency and reliability of information. In this paper, the origin and development of transaction processing technology are traced and its current research status is summarized. The flat transaction model and its related technologies are discussed first; since the flat model is not suitable for complex and distributed applications, extended transaction models and transactional workflow came into being, and these technologies are analyzed in detail. Transaction processing in advanced database technologies is then reviewed. With the development of Web service based cross-organizational business processes, transaction processing in open, loosely coupled Web environments has become a focus, so the latest related work is analyzed, and future directions and challenges are discussed.
Study of Some Key Techniques in Mining Association Rule
Chen Geng, Zhu Yuquan, Yang Hebiao, Lu Jieping, Song Yuqing, and Sun Zhihui
2005, 42(10):  1785-1789. 
The Apriori algorithm has become a classic method for mining association rules. Its difficulty and computational cost lie in two aspects: (1) generating candidate frequent itemsets and calculating their support, and (2) reducing the size of the candidate frequent itemsets and the number of I/O accesses. At present, many methods solve the second problem very well, but very few address the first. An efficient and fast algorithm based on a binary format is proposed for discovering candidate frequent itemsets and calculating itemset support, which executes only logical operations. A performance comparison of this algorithm with Apriori-like algorithms is given, and the experiments show that the new algorithm is more efficient.
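The binary-format idea can be sketched with transactions as bit vectors, where support counting needs only AND operations; this is a minimal sketch, not the paper's full algorithm:

```python
# Transactions as bit vectors: bit j set <=> item j present. Support of a
# candidate itemset is then a bitwise AND plus a comparison, no subset scans.
transactions = [0b1011, 0b0011, 0b1110, 0b0111]   # 4 transactions, 4 items

def support(itemset_mask: int) -> int:
    """Count transactions whose bit vector contains every bit of the mask."""
    return sum(1 for t in transactions if t & itemset_mask == itemset_mask)

print(support(0b0011))   # items {0, 1} appear together in 3 transactions
```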
The HJPS Training Algorithm for Multilayer Feedforward Neural Networks
Li Yanlai, Wang Kuanquan, and David Zhang
2005, 42(10):  1790-1795. 
Based on the Hooke-Jeeves pattern search method from optimization theory, a fast training algorithm, HJPS, is proposed for multilayer feedforward neural networks. It consists of two alternating steps: exploratory search and pattern move. In the training process, only the changed part of the error function is recomputed. Simulation results, including a function approximation problem and a pattern recognition problem, show that the proposed algorithm improves remarkably on BP and other fast algorithms in terms of convergence speed and computing time. The experimental results also demonstrate the high generalization ability of the HJPS algorithm.
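The underlying Hooke-Jeeves pattern search alternates exploratory moves with pattern moves; a generic sketch on a toy objective follows (HJPS itself applies this scheme to the network error function):

```python
def hooke_jeeves(f, x0, step=0.5, eps=1e-6, shrink=0.5):
    """Hooke-Jeeves: axis-wise exploratory moves, plus pattern moves that
    extrapolate the last successful direction; shrink the step on failure."""
    def explore(base, s):
        x = list(base)
        for i in range(len(x)):
            for d in (s, -s):
                trial = x[:]
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x
    best = list(x0)
    while step > eps:
        x = explore(best, step)
        if f(x) < f(best):
            while True:                          # pattern moves while they pay off
                pattern = [2 * a - b for a, b in zip(x, best)]
                best, x = x, explore(pattern, step)
                if f(x) >= f(best):
                    x = best
                    break
        else:
            step *= shrink                       # no progress: refine the step
    return best

print(hooke_jeeves(lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2, [0.0, 0.0]))
```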
Feature Selection for Cancer Classification Based on Support Vector Machine
Li Yingxin and Ruan Xiaogang
2005, 42(10):  1796-1801. 
Feature selection is an essential step in cancer classification with DNA microarrays, because there are a large number of genes from which to predict classes and a relatively small number of samples. This work addresses the selection of a small subset of genes for classification from broad patterns of gene expression profiles by proposing a two-step feature selection method. The first step uses a new metric, proposed in this paper as a criterion for class separability, to remove genes irrelevant to the classification task; a support vector machine with a radial basis function kernel is then applied to validate the classification performance of the selected genes in distinguishing tissue types. The second step filters out redundant genes by sensitivity analysis based on the support vector machine classifier, after a pair-wise redundancy analysis. The two steps are applied to gene expression profiles of human acute leukemia, and a better and more compact gene subset is obtained than with the baseline method, which shows the feasibility and effectiveness of the proposed method.
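The filter-then-validate scheme can be sketched with a Fisher-style separability score standing in for the paper's metric, plus an RBF-kernel SVM from scikit-learn; the data here are synthetic:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fisher_score(X, y):
    """Per-gene class separability (a stand-in for the paper's own metric)."""
    c0, c1 = X[y == 0], X[y == 1]
    return (c0.mean(0) - c1.mean(0)) ** 2 / (c0.var(0) + c1.var(0) + 1e-12)

rng = np.random.default_rng(0)
X = rng.normal(size=(72, 1000))            # 72 samples, 1000 "genes"
y = rng.integers(0, 2, 72)
X[y == 1, :20] += 1.5                      # 20 genes carry real class signal

top = np.argsort(fisher_score(X, y))[::-1][:20]    # step 1: filter
clf = SVC(kernel='rbf')                             # step 2: validate with RBF-SVM
print(cross_val_score(clf, X[:, top], y, cv=5).mean())
```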
QuCOM: A QoS Management Model for Component System
Liao Yuan, Huai Xiaoyong, and Li Mingshu
2005, 42(10):  1802-1808. 
In environments with varying performance, multiple applications compete for and share a limited amount of system resources, and suffer from variations in resource availability. QoS-aware applications should adapt themselves to such dynamic environments to guarantee QoS at runtime. Since system resources are managed globally, adaptation mechanisms must be built not only within an application but also in the system where the application runs. In this paper, a QoS management model (QuCOM) for component systems is given to support application adaptation to a dynamic system. For verification, the results of an experimental example (a video streaming application) are presented.
An Algorithm for Finding All Hamiltonian Cycles in Digraph via Hierarchical Correlation
Wen Zhonghua and Jiang Yunfei
2005, 42(10):  1809-1814. 
Finding all Hamiltonian cycles in a digraph is NP-hard, and no efficient algorithm for it exists; moreover, the traditional algorithms are uninformed searches. To speed up the search, this paper builds, from the concept of a path, the correlation of paths and the hierarchical correlation of paths in a digraph, and designs an algorithm based on hierarchical correlation for finding all Hamiltonian cycles. Paths of length k+1 and their correlations are obtained from the hierarchical correlation of paths of length k, and this proceeds level by level; when k+1 equals n-1, all Hamiltonian paths have been obtained, and all Hamiltonian cycles are then found by checking the Hamiltonian paths. Analysis indicates that the designed algorithm is more efficient than current algorithms for finding all Hamiltonian cycles in a digraph, reducing the complexity, and it provides a new way of thinking about the Hamiltonian cycle problem.
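The level-by-level extension of paths from length k to k+1 can be sketched naively as follows, without the paper's correlation structures, which exist precisely to prune this search:

```python
def all_hamiltonian_cycles(adj):
    """Grow paths level by level; when a path covers all n vertices,
    close it into a cycle if its last vertex connects back to the start."""
    n = len(adj)
    start = 0                                  # a cycle can be rooted anywhere
    paths = [[start]]
    for _ in range(n - 1):                     # n-1 extension levels
        paths = [p + [v] for p in paths
                 for v in adj[p[-1]] if v not in p]
    return [p + [start] for p in paths if start in adj[p[-1]]]

# Directed 4-cycle with one chord.
adj = {0: [1], 1: [2, 3], 2: [3], 3: [0]}
print(all_hamiltonian_cycles(adj))   # [[0, 1, 2, 3, 0]]
```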
On the Problem of How to Place the Observers in Passive Testing
Zhao Baohua, Guo Xionghui, Qian Lan, and Qu Yugui
2005, 42(10):  1815-1819. 
Placing the smallest number of observers for passive testing while ensuring that they can monitor all the behavior of a network is an interesting and useful problem. First, this problem is proved to be NP-complete. Then the special case in which the network topology is a tree is discussed, and a linear-time algorithm for it is presented. Based on the idea behind the tree case, an improved approximation algorithm is proposed whose approximation ratio is 2-O(1). Finally, a simulation experiment illustrates the effectiveness of the improved algorithm, and future extensions are discussed.
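If "monitoring all behavior" is read as covering every link, observer placement on a tree becomes minimum vertex cover, which a linear-time dynamic program solves; this reading is our assumption for illustration, not necessarily the paper's exact formulation:

```python
from collections import defaultdict

def min_observers_on_tree(n, edges):
    """Linear-time DP for minimum vertex cover on a tree: dp[v][0] is the
    optimum for v's subtree with v not chosen, dp[v][1] with v chosen."""
    g = defaultdict(list)
    for u, v in edges:
        g[u].append(v)
        g[v].append(u)
    dp = [[0, 1] for _ in range(n)]
    visited, order, stack = [False] * n, [], [0]
    while stack:                               # iterative DFS from node 0
        u = stack.pop()
        visited[u] = True
        order.append(u)
        stack += [w for w in g[u] if not visited[w]]
    parent = {0: None}
    for u in order:
        for w in g[u]:
            if w not in parent:
                parent[w] = u
    for u in reversed(order):                  # combine children bottom-up
        for w in g[u]:
            if parent.get(w) == u:
                dp[u][0] += dp[w][1]           # u unchosen: child must be chosen
                dp[u][1] += min(dp[w])         # u chosen: child is free
    return min(dp[0])

print(min_observers_on_tree(5, [(0, 1), (0, 2), (1, 3), (1, 4)]))  # 2
```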
Research on I/O Optimizations in Out-of-Core Computation
Tang Jianqi, Fang Binxing, Hu Mingzeng, and Wang Wei
2005, 42(10):  1820-1825. 
Applications with large amounts of data lead to out-of-core computation, in which I/O becomes the major limiting factor because of the low speed of accessing data on disks. A runtime-library method is presented for I/O optimization. Three optimization strategies are described: data sieving on regular sections, data prefetching, and data reuse on the edge. Programmers can adopt the corresponding APIs for different applications to reduce execution time. Experimental results show that the performance of out-of-core computation is improved effectively by reducing the number of I/O operations and the amount of data exchanged between main memory and disks, as well as by hiding part of the I/O latency.
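Data sieving on a regular section can be sketched as one large contiguous read followed by in-memory selection; the array below stands in for an out-of-core file:

```python
import numpy as np

def sieved_read(array, offsets):
    """Data sieving: issue one large contiguous read covering every requested
    offset, then pick out the wanted elements in memory, instead of one
    small disk request per element."""
    lo, hi = min(offsets), max(offsets) + 1
    buf = array[lo:hi]                     # the single big "disk" read
    return [buf[o - lo] for o in offsets]

data = np.arange(1_000_000)                # stands in for an out-of-core array
wanted = list(range(100, 5000, 7))         # a regular (strided) section
print(sieved_read(data, wanted)[:5])       # [100, 107, 114, 121, 128]
```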
1.5Gbps High Speed Serial Data Recovery Circuit Made from Standard Cells
Sun Yongming and Lin Qi
2005, 42(10):  1826-1831. 
In high-speed serial interface integrated circuit design, the high-speed serial data recovery circuit is a troublesome part. At gigabit rates, integrated circuits usually use analog circuits for functions such as clock generation and clock-data recovery. Compared with digital circuits, analog circuits have lower noise tolerance, need more area and power, are more sensitive to process variation, and have lower testability; moreover, integrating a large amount of analog circuitry into a digital system is very difficult. This paper introduces an all-digital high-speed serial data recovery circuit module for a 1.5Gbps SATA interface implementation. Using no PLL or DLL analog circuits, the module is built entirely from standard cells. In contrast to designs based on analog circuits, this all-digital circuit is easy to implement and has lower power consumption and smaller area. The circuit is being implemented in an ATA/SATA interface controller chip designed and manufactured in a 0.18μm CMOS process.