ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 October 2009, Volume 46 Issue 10
Paper
Performance Analysis of the 2-D Networks-on-Chip
Wang Wei, Qiao Lin, Yang Guangwen, and Tang Zhizhong
2009, 46(10):  1601-1611. 
The network-on-chip interconnect has become an important factor affecting the performance of chip multiprocessors, and almost all interconnection structures evolve from 2-D networks. After analyzing the static network characteristics of several popular 2-D networks-on-chip whose inner-node degree is 4, the authors propose a network-on-chip router and its communication protocol. The router places buffers only on the output ports rather than on both the input and output ports, which reduces power consumption and increases transmission speed. Based on an analysis of the dynamic network characteristics of these structures under varying scales and loads, a new metric for the overall performance of networks-on-chip, the load per cost-delay product, is proposed, using the number of links to indicate cost. Finally, the overall performance of the different 2-D network-on-chip structures is analyzed and the suitable cases for each structure are indicated. The experimental results show that multi-ring networks do well at small scale under low load, whereas mesh grids work better at larger scale under heavier load. Wrap-around structures are the first choice whenever they can be used. However, at large scale it is difficult to obtain good performance by adopting any one of these structures directly.
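A minimal sketch of how a metric of this "load per cost-delay product" kind could be computed is shown below. The use of link count as the cost term follows the abstract; the exact normalization, the function name, and the sample numbers are assumptions for illustration only.

```python
# Hypothetical illustration of a "load per cost-delay product" style metric
# for comparing 2-D on-chip network topologies. Link count stands in for cost,
# as in the abstract; the exact formula and the numbers are assumptions.

def load_per_cost_delay(accepted_load: float, num_links: int, avg_delay: float) -> float:
    """Higher is better: sustained load normalized by (cost * delay)."""
    return accepted_load / (num_links * avg_delay)

# Example: a 4x4 mesh (24 links) vs. a 4x4 torus (32 links) under the same load.
mesh = load_per_cost_delay(accepted_load=0.45, num_links=24, avg_delay=18.0)
torus = load_per_cost_delay(accepted_load=0.45, num_links=32, avg_delay=14.0)
print(f"mesh:  {mesh:.5f}")
print(f"torus: {torus:.5f}")
```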
A Coverage Directed Test Generation Platform for Microprocessors Using Genetic Approach
Shen Haihua, Wang Pengyu, Wei Wenli, and Guo Qi
2009, 46(10):  1612-1625. 
Due to the increasing complexity of hardware designs and stringent time-to-market constraints, generating high-quality test cases that meet the requirements of functional verification remains a major challenge in a typical design cycle. Random test generation is one of the most important methods for the verification of modern VLSI. In particular, coverage directed test generation (CDG), which closes a feedback loop from the coverage domain back to the generator so that it produces potentially high-quality stimuli for the design under verification (DUV), is the main workhorse in today's random test generation research. Genetic algorithms are an intelligent approach to automating the generation of effective solutions for black-box optimization, without requiring expert knowledge or extra resources. In this paper, the authors present a GA-based CDG platform for full-chip verification of microprocessors, in which test directives are continuously updated according to feedback from the current functional coverage reports, based on an in-depth discussion and analysis of coverage directed test generation using the genetic approach. The platform has been put into practice at ICT for the verification of a general-purpose RISC microprocessor, GODSON-I. Experimental results show that, compared with the original CRPG, the platform clearly accelerates the verification process and improves the reached coverage from 83.3% to 91.7% after about 600 million instructions have been simulated, which implies that verification efficiency is greatly improved and the demand for skilled manpower is dramatically reduced.
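The feedback loop from coverage back to the test generator can be pictured with a toy genetic search over generator directives. This sketch is not the paper's platform: the directive encoding, the stand-in coverage model, and the GA parameters are all assumptions.

```python
# Toy sketch of coverage-directed test generation with a genetic algorithm.
# A "directive" is a vector of biases fed to a random test generator; fitness
# is the functional coverage reached by simulating the generated tests.
# Everything here (encoding, coverage model, parameters) is illustrative only.
import random

DIRECTIVE_LEN = 8          # number of generator bias knobs (assumption)
POP, GENERATIONS = 20, 30

def simulate_coverage(directive):
    """Stand-in for running the generator + simulator and reading coverage.
    A real platform would return the fraction of coverage points hit."""
    target = [0.7, 0.2, 0.9, 0.5, 0.1, 0.8, 0.3, 0.6]
    return 1.0 - sum(abs(d - t) for d, t in zip(directive, target)) / DIRECTIVE_LEN

def mutate(directive, rate=0.2):
    return [min(1.0, max(0.0, d + random.uniform(-0.1, 0.1))) if random.random() < rate else d
            for d in directive]

def crossover(a, b):
    cut = random.randrange(1, DIRECTIVE_LEN)
    return a[:cut] + b[cut:]

population = [[random.random() for _ in range(DIRECTIVE_LEN)] for _ in range(POP)]
for gen in range(GENERATIONS):
    scored = sorted(population, key=simulate_coverage, reverse=True)
    parents = scored[:POP // 2]                       # selection by coverage feedback
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=simulate_coverage)
print("best coverage reached:", round(simulate_coverage(best), 3))
```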
Feedback Utilization Control for Heterogeneous Real-Time Clusters
Wang Jie, Wang Hongan, Fu Yong, and Li Xin
2009, 46(10):  1626-1633. 
In applications such as online games, online data services, and wireless sensor networks, large-scale server clusters are needed to handle a high volume of soft real-time requests. Such clusters must adaptively control the CPU utilization of heterogeneous processors in order to maintain the desired performance and prevent system overload in the face of unpredictable workloads. To this end, the authors propose a new method, heterogeneous distributed utilization control with load balancing (HDUC-LB), to handle unpredictable workloads and improve system QoS in heterogeneous real-time clusters. HDUC-LB consists of two parts. First, a new load balancing model for heterogeneous real-time clusters is presented to handle system overload. Then, a utilization control method based on feedback control theory is proposed to enforce the desired utilization bound of each processor. Simulation results show that HDUC-LB provides robust utilization control and ensures load balancing among different processors in open and unpredictable environments, and that both the load balancing model and the utilization control method work effectively and robustly in large-scale clusters.
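A feedback utilization controller of the general kind described above can be sketched as a simple proportional controller that adjusts each node's admitted load toward a utilization set point. The control law, gain, task model, and "measurement" below are assumptions, not the paper's HDUC-LB design.

```python
# Minimal sketch of feedback-based CPU utilization control on a heterogeneous
# cluster: each node adjusts its admitted task rate so that measured
# utilization tracks a set point. Proportional control only; gain, task
# model, and the simplistic measurement are illustrative assumptions.

SET_POINT = 0.7          # desired CPU utilization bound
KP = 0.5                 # proportional gain (assumption)

class Node:
    def __init__(self, name, speed):
        self.name, self.speed = name, speed   # speed models heterogeneity
        self.admitted_rate = 10.0             # tasks per second admitted

    def measured_utilization(self, work_per_task):
        # utilization = admitted work / capacity (a stand-in for measurement)
        return self.admitted_rate * work_per_task / self.speed

    def control_step(self, work_per_task):
        error = SET_POINT - self.measured_utilization(work_per_task)
        self.admitted_rate = max(0.0, self.admitted_rate * (1.0 + KP * error))

nodes = [Node("fast", speed=100.0), Node("slow", speed=40.0)]
for step in range(20):
    work = 5.0 if step < 10 else 8.0          # workload changes unpredictably
    for n in nodes:
        n.control_step(work)

for n in nodes:
    print(n.name, "utilization ->", round(n.measured_utilization(8.0), 3))
```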
Memory Request Queue of Multi-Core Multi-Threading Processor for Real-Time Stream Processing
Tian Hangpei, Gao Deyuan, Fan Xiaoya, and Zhu Yian
2009, 46(10):  1634-1641. 
In order to improve DRAM bandwidth, the memory request queue inside the memory controller is usually equipped with an out-of-order scheduler, which interferes with real-time stream processing on multi-core multi-threading processors. Based on a study of typical memory request queue structures, the authors propose a novel memory request queue to solve this problem. The scheduler of the new memory request queue still dispatches memory requests out of order, but is constrained by a window. The window defines a fixed number of memory requests that are visible to the scheduler; the scheduler accepts the requests of a new window only after all requests belonging to the current window have been served. On the one hand, the out-of-order scheduling algorithm based on parallel execution improves memory bandwidth effectively; on the other hand, the window bounds the largest memory request delay in the worst case by preventing the scheduler from postponing the execution of a memory request indefinitely. Simulation shows that the proposed memory request queue can adjust the largest memory request delay in a scalable way and that the hardware cost is small. Compared with a general out-of-order memory request queue, the proposed design not only retains the memory bandwidth but also supports real-time stream processing, thus offering a new way to do real-time computing on multi-core multi-threading processors.
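The window mechanism can be illustrated with a small model: the scheduler may reorder freely inside the current window, but never sees requests of the next window until the current one drains, which bounds how long any single request can wait. The bank/row model and the row-hit heuristic below are assumptions, not the paper's scheduler.

```python
# Sketch of a windowed out-of-order memory request queue. Inside a window the
# scheduler may reorder requests (here: prefer requests hitting the currently
# open row of a bank); a request from the next window is never served before
# the current window drains, so no request waits longer than about one window
# of services. Bank/row model and the row-hit heuristic are assumptions.
from collections import deque

def schedule(requests, window_size):
    """requests: list of (req_id, bank, row). Returns the service order."""
    pending = deque(requests)
    order = []
    while pending:
        window = [pending.popleft() for _ in range(min(window_size, len(pending)))]
        last_row = {}
        # Out-of-order within the window only.
        while window:
            hit = next((r for r in window if last_row.get(r[1]) == r[2]), None)
            req = hit if hit is not None else window[0]
            window.remove(req)
            last_row[req[1]] = req[2]
            order.append(req[0])
    return order

reqs = [(0, 0, 5), (1, 1, 9), (2, 0, 5), (3, 0, 7), (4, 1, 9), (5, 0, 5)]
print(schedule(reqs, window_size=3))   # reordering happens only inside each window
```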
Dynamically Clustering Localization Mechanism for Wireless Sensor Networks
Zhou Quan, Zhu Hongsong, Xu Yongjun, Luo Haiyong, and Li Xiaowei
2009, 46(10):  1642-1650. 
A new localization method is proposed for wireless sensor networks. The method uses an iterative localization algorithm and a distributed clustering scheme to decrease the number of generated communication packets and increase the localization accuracy. The clustering scheme, named virtual cluster shift (VCS), lets the self-organized sensor nodes of a wireless sensor network dynamically build one or more virtual clusters to localize and track mobile targets. When one or more sensors detect a target, one sensor node is selected as the cluster head. The cluster head is in charge of organizing neighboring sensors into a virtual cluster, fusing data, and estimating the target's state, such as the location and velocity of the mobile target. When the target moves out of the monitored area of the current virtual cluster, the cluster head hands over its role to the cluster member nearest to the target. The new cluster head forms its virtual cluster by replacing some of the old cluster members. A mean-shift-based algorithm is used to iteratively estimate the target's position using the data gathered inside the virtual cluster, and an optimal weight of the kernel function is given to minimize the estimation variance. Analysis and simulation results show that the communication overhead and jitter of the proposed algorithm are less than one third of those of the centralized algorithm.
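The in-cluster position estimation can be pictured as a kernel-weighted iterative mean over the member sensors' measurements, in the spirit of mean shift. The Gaussian kernel, bandwidth, and measurement model here are assumptions rather than the paper's optimal-weight derivation.

```python
# Sketch of a mean-shift style iterative estimate of a target position from
# measurements gathered inside a virtual cluster. Each member contributes a
# noisy position estimate; the cluster head iterates a kernel-weighted mean.
# The Gaussian kernel and its bandwidth are illustrative assumptions.
import math

def mean_shift_estimate(samples, bandwidth=5.0, iters=20):
    x, y = samples[0]                      # start from an arbitrary sample
    for _ in range(iters):
        wsum = wx = wy = 0.0
        for sx, sy in samples:
            d2 = (sx - x) ** 2 + (sy - y) ** 2
            w = math.exp(-d2 / (2 * bandwidth ** 2))
            wsum += w; wx += w * sx; wy += w * sy
        x, y = wx / wsum, wy / wsum
    return x, y

# Noisy per-sensor estimates of one target near (10, 12), plus an outlier.
samples = [(9.5, 12.3), (10.4, 11.8), (10.1, 12.6), (9.8, 11.5), (25.0, 3.0)]
print(mean_shift_estimate(samples))
```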
Performance Analysis of Multi-Channel MAC Schemes Based on 802.11
Mao Jianbing, Mao Yuming, Leng Supeng, and Bai Xiang
2009, 46(10):  1651-1659. 
In order to improve the throughput of medium access control (MAC) schemes in wireless networks, some researchers have proposed splitting the single shared channel into two sub-channels: a control sub-channel and a data sub-channel. The control sub-channel is used for access reservation to the data sub-channel, over which the data packets are transmitted. In this paper, an analytical framework is presented to evaluate the maximum achievable throughput of a class of generic multi-channel MAC schemes based on the RTS/CTS (request-to-send/clear-to-send) dialogue and on the backoff contention resolution mechanism of the 802.11 DCF (distributed coordination function). Using a discrete Markov chain model of the backoff mechanism, the interval between successfully transmitted RTS/CTS exchanges on the control sub-channel is obtained and then applied to derive the saturation throughput of the multi-channel MAC schemes, which is closely related to the bandwidth allocation ratio between the control sub-channel and the data sub-channel. Moreover, the influence of the number of stations, the packet size, and the minimum contention window on the optimal bandwidth allocation ratio is investigated. Compared with the corresponding single-channel MAC scheme, which sends RTS/CTS packets and DATA packets on one shared channel, the experimental results show that the multi-channel MAC schemes bring only limited performance enhancement; when RTS/CTS are transmitted at the full channel rate, the single-channel MAC scheme can outperform the multi-channel MAC schemes.
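This kind of backoff analysis builds on the classical Bianchi-style Markov chain for 802.11 DCF. A minimal fixed-point solution of that standard single-channel model is sketched below; it is not the paper's multi-channel derivation, and the parameters and damped iteration are implementation choices.

```python
# Fixed-point solution of the classical Bianchi Markov-chain model of the
# 802.11 DCF backoff: tau is a station's per-slot transmission probability,
# p its conditional collision probability. This is the standard single-channel
# model that multi-channel analyses of this kind build on, not the paper's
# own derivation; w_min, m, and the damping factor are example choices.

def dcf_fixed_point(n, w_min=32, m=5, iters=5000):
    """n stations, minimum contention window w_min, m backoff stages."""
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        new_tau = (2.0 * (1.0 - 2.0 * p)) / (
            (1.0 - 2.0 * p) * (w_min + 1) + p * w_min * (1.0 - (2.0 * p) ** m)
        )
        tau = 0.5 * tau + 0.5 * new_tau   # damped update for stable convergence
    return tau, p

for n in (5, 10, 20, 50):
    tau, p = dcf_fixed_point(n)
    print(f"n={n:3d}  tau={tau:.4f}  collision prob={p:.4f}")
```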
A Practical Data Possession Checking Scheme for Networked Archival Storage
Xiao Da, Shu Jiwu, Chen Kang, and Zheng Weimin
2009, 46(10):  1660-1668. 
Data possession checking (DPC) is used in networked archival storage to check in real time whether the remote server still holds a file intact, before the actual access to the file occurs. The authors present a practical DPC scheme. In a challenge-response protocol, the checker ascertains the possession of a file by asking the server to compute a hash value over some randomly chosen data blocks of the file and return it together with a corresponding verification block. With this random sampling verification method, the computational and communication overheads of possession checking are reduced while a sufficiently high confidence level is obtained. A challenge renewal mechanism based on a circular queue of verification blocks is also proposed, which allows the number of effective challenges the checker can issue to be increased dynamically. Analysis shows that the storage overhead on the checker side and the communication overhead between the checker and the server are constant. Experimental results show that the computational overhead of a check with a confidence level of 99.4% is 1.8 ms, which is negligible compared with the cost of disk I/O, and that the computational overhead of file preprocessing is reduced by three orders of magnitude by avoiding the use of public-key cryptosystems.
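The challenge-response idea can be illustrated with a toy checker/server pair: the checker precomputes, for each future challenge, the digest of a random sample of blocks, and later asks the server to recompute it. The hash choice, block size, and the omission of the verification-block circular queue are simplifying assumptions.

```python
# Toy sketch of random-sampling data possession checking. At setup time the
# checker precomputes, for each future challenge, the digest of a few randomly
# chosen file blocks; later the server must recompute that digest on demand.
# SHA-256, the block size, and the omission of the paper's verification-block
# circular queue (which renews challenges) are simplifying assumptions.
import hashlib, os, random

BLOCK = 4096

def split_blocks(data):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def make_challenge(blocks, sample_size, seed):
    rnd = random.Random(seed)
    idx = sorted(rnd.sample(range(len(blocks)), sample_size))
    h = hashlib.sha256()
    for i in idx:
        h.update(blocks[i])
    return {"indices": idx, "expected": h.hexdigest()}

def server_respond(blocks, indices):
    h = hashlib.sha256()
    for i in indices:
        h.update(blocks[i])
    return h.hexdigest()

file_data = os.urandom(BLOCK * 100)                      # stand-in for an archived file
blocks = split_blocks(file_data)
chal = make_challenge(blocks, sample_size=10, seed=42)   # kept by the checker

# An honest server passes; a server that lost a sampled block fails.
assert server_respond(blocks, chal["indices"]) == chal["expected"]
damaged = blocks[:]; damaged[chal["indices"][0]] = b"\0" * BLOCK
assert server_respond(damaged, chal["indices"]) != chal["expected"]
print("possession check behaves as expected")
```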
Security Evaluation for Inter-Domain Routing System in the Internet
Liu Xin, Wang Xiaoqiang, Zhu Peidong, and Peng Yuxing
2009, 46(10):  1669-1677. 
For lack of an effective security mechanism, the Internet's inter-domain routing system, based on the Border Gateway Protocol (BGP), faces serious security threats. Although many researchers have studied the routing security problems of the BGP routing system exhaustively, few have quantified its security situation, and network operators do need useful information to perceive the security status of their autonomous systems. To solve this problem, the authors analyze the hierarchical characteristics of the Internet's inter-domain routing system and propose a security evaluation model that makes use of anomalous BGP routes. Based on a route status tree derived from the hierarchical characteristics implicit in the BGP routing system, the model can describe the hierarchical relationships among the routing entities and store and record the security states of routes for every routing entity. Finally, the model computes the routing security state of every entity according to the detected anomalous BGP routes. The experimental results show that the model can assess the security threat status of BGP routers, autonomous systems, and the inter-domain routing system as a whole, and can provide valuable, intuitive curves for network operators. The model has been applied to a BGP monitoring system.
Self-Regeneration Based Method for Topology Control with Intrusion Tolerance in Wireless Sensor Networks
Wang Liangmin and Ma Jianfeng
2009, 46(10):  1678-1685. 
Two key problems of topology control are pointed out for unattended wireless sensor networks deployed in hostile regions: first, how to obtain an intrusion-tolerant architecture while intrusions exist; second, how to sustain a connected network when all deployed nodes will ultimately be exhausted. To solve these two problems, a novel approach for topology control is presented, which is divided into three phases: topology discovery, topology update, and topology regeneration. In the topology discovery phase, a tricolor-based method is proposed to build an architecture with high tolerance ability, in which black nodes are selected as cluster heads and grey nodes are dormant, so the network is sparse and tolerant. In the update and regeneration phases, newly deployed nodes are regarded as a renewable resource used to replace failed nodes, replenish the consumed energy, restore the degraded tolerance, and prolong network lifetime. The topology constructed by this approach is proved to be intrusion tolerant. Several properties of the integrated method are shown by simulations, such as the distribution of cluster heads, the relation between node density and the number of clusters, and the advantage of replenishing residual energy. Finally, the differences from related work are shown by comparison.
A Security Function Test Suite Generation Method Based on Security Policy Model
Zhang Min, Feng Dengguo, and Chen Chi
2009, 46(10):  1686-1692. 
Third-party independent security function testing is an essential step in the security evaluation of security products. Generally, the test cases for independent testing are generated from the product specification. However, in the independent testing of security products such as secure database management systems (SDBMS), the product must satisfy not only the requirements of the product specification but also the requirements of the security policies, which describe the protected objects and the protection measures. Since the behavior of a security product is described more precisely in its security model than in its specification, the authors propose a test case generation method based on a formal security policy model. The method includes the generation of the test specification from the formal security policy model, test space partitioning based on both grammar and rules, and partitioning rules based on types and combination principles. The method is more likely to find faults and errors in the product than manual testing, and it helps automate testing and improves efficiency.
A Feature Weighting Scheme for Text Categorization Based on Feature Importance
Liu He, Liu Dayou, Pei Zhili, and Gao Ying
2009, 46(10):  1693-1703. 
Text categorization is one of the key research fields in text mining, and feature weighting is an important problem in text categorization. For computing feature weights, a feature weighting scheme for text categorization is proposed. In this scheme, feature importance is defined based on real rough set theory; through this concept, the decision-making information a feature provides for categorization is introduced into the weight of that feature. Experiments are then performed on two standard international text datasets, Reuters-21578 Top10 and WebKB. Through the computation of the total within-class scatter and between-class scatter in Fisher linear discriminant analysis, it is verified that the proposed scheme decreases the total within-class scatter and increases the between-class scatter; that is, the scheme makes samples in the same class more compact and samples in different classes more separated on the two datasets. Thereby, the proposed scheme improves the spatial distribution of samples and simplifies the mapping from samples to classes. Finally, the proposed scheme is evaluated on the two datasets with Naive Bayes, kNN, and SVM classifiers. The experimental results show that the scheme can enhance the precision, recall, and F1 value of categorization.
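The general shape of such a scheme, a conventional term weight scaled by a per-feature importance factor, can be sketched as below. The importance values here are placeholders; the paper derives them from rough set theory, which is not reproduced in this sketch.

```python
# Sketch of a feature weighting scheme in which a conventional tf-idf weight
# is scaled by a per-feature "importance" factor. In the paper that factor
# comes from real rough set theory; here it is a placeholder dictionary, so
# only the overall shape of the scheme is illustrated.
import math
from collections import Counter

docs = [
    ["price", "market", "stock", "trade"],
    ["goal", "match", "team", "stock"],
    ["market", "trade", "price", "price"],
]
importance = {"price": 0.9, "market": 0.8, "stock": 0.4,
              "trade": 0.7, "goal": 0.9, "match": 0.9, "team": 0.8}

def weight_vector(doc, docs, importance):
    n = len(docs)
    tf = Counter(doc)
    vec = {}
    for term, f in tf.items():
        df = sum(term in d for d in docs)
        idf = math.log((n + 1) / (df + 1)) + 1.0
        vec[term] = f * idf * importance.get(term, 1.0)   # importance-scaled weight
    return vec

for i, d in enumerate(docs):
    print(i, {t: round(w, 3) for t, w in weight_vector(d, docs, importance).items()})
```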
A Review of the State-of-the-Art of Research on Large-Scale Corpora Oriented Language Modeling
Luo Weihua, Liu Qun, and Bai Shuo
2009, 46(10):  1704-1712. 
The N-gram language model (LM) is a key component in many research areas of natural language processing, such as statistical machine translation, information retrieval, and speech recognition. Using higher-order models and more training data can significantly improve application performance. However, given the limited resources of real systems (e.g., memory and CPU), the cost of training and accessing large-scale LMs becomes prohibitive as more and more monolingual corpora become available. Therefore, research on large-scale language modeling is drawing increasing attention. The authors review the state of the art of this issue, focusing on several representative approaches: an ad hoc method, a randomized representation model, and a distributed parallel framework. The ad hoc method is a unified one integrating divide-and-conquer over the data, compact data structures, quantization-based data compression, and memory mapping. The randomized representation of the LM is a lossy compression model based on Bloom filters. The distributed parallel framework trains the LM with MapReduce and serves N-gram requests in a batch mode of remote calls. The performance of statistical machine translation systems utilizing each approach is described with experiments, and finally the pros and cons are compared.
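The randomized, Bloom-filter-based representation trades a small false-positive rate for a drastic memory reduction. A minimal Bloom filter holding n-gram keys is sketched below; the sizes and hash construction are illustrative, and real randomized LMs also store quantized counts rather than plain membership.

```python
# Minimal Bloom filter over n-gram keys, illustrating the lossy randomized LM
# representation: membership queries may yield false positives but never false
# negatives, and memory is a fixed bit array regardless of the number or length
# of keys. Sizes, hash construction, and storing membership only (real
# randomized LMs also encode quantized counts) are illustrative choices.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1 << 20, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, key: str):
        for i in range(self.k):
            h = hashlib.md5(f"{i}|{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, key):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))

bf = BloomFilter()
for ngram in ("the cat sat", "cat sat on", "sat on the"):
    bf.add(ngram)
print("the cat sat" in bf)      # True
print("dog sat on" in bf)       # almost certainly False (false positives possible)
```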
A New Method to Compute Semantic Orientation
Du Weifu, Tan Songbo, Yun Xiaochun, and Cheng Xueqi
2009, 46(10):  1713-1720. 
At present, people increasingly prefer the Internet for expressing their personal experiences and opinions on almost anything, at review sites, forums, discussion groups, blogs, and so on. This user-generated content contains very valuable emotional information. How to mine this emotional information automatically and efficiently is therefore a challenging question, and a promising one for applications such as enterprise business intelligence and public opinion surveys. Text-level sentiment analysis can be considered an extension and enhancement of traditional topic detection and tracking (TDT) technology with new approaches and ideas, and it is based on computing the semantic orientation of words. In this paper, a novel scalable framework for computing word semantic orientation is proposed, in which the computation is transformed into a function optimization problem. As an instance of the proposed framework, the authors first build an undirected graph using word similarity computation, and then partition the word-to-word graph following the idea of minimum cut; the resulting optimization problem is solved with a simulated annealing algorithm. The experimental results show that the proposed framework is reasonable and that the algorithm performs better than existing counterparts.
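One way to picture this framework is as an energy minimization over a word similarity graph: seed words keep fixed polarities and the labels of the other words are flipped by simulated annealing so that similar words tend to share an orientation. The toy graph, energy function, and annealing schedule below are illustrative assumptions, not the paper's exact formulation.

```python
# Toy sketch of word semantic orientation as graph-based energy minimization
# solved by simulated annealing: edges carry word-similarity weights, seed
# words have fixed polarity (+1/-1), and the remaining labels are flipped to
# minimize the total weight of edges whose endpoints disagree (a minimum-cut
# flavor). Graph, energy, and schedule are illustrative assumptions.
import math, random

edges = {("good", "great"): 0.9, ("great", "nice"): 0.8, ("bad", "awful"): 0.9,
         ("awful", "poor"): 0.7, ("nice", "poor"): 0.1, ("good", "bad"): 0.05}
seeds = {"good": +1, "bad": -1}
words = sorted({w for e in edges for w in e})

def energy(labels):
    return sum(w for (a, b), w in edges.items() if labels[a] != labels[b])

labels = {w: seeds.get(w, random.choice([+1, -1])) for w in words}
t = 1.0
for step in range(5000):
    w = random.choice([x for x in words if x not in seeds])
    cand = dict(labels); cand[w] = -cand[w]
    delta = energy(cand) - energy(labels)
    if delta <= 0 or random.random() < math.exp(-delta / t):
        labels = cand
    t = max(0.01, t * 0.999)        # cooling schedule (assumption)

print({w: ("+" if s > 0 else "-") for w, s in sorted(labels.items())})
```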
A Survey of XML Stream Management
Yang Weidong and Shi Baile
2009, 46(10):  1721-1728. 
XML stream management systems fit a large class of new applications such as publish/subscribe systems and network monitoring systems, and the extensible markup language has become the de facto standard for the representation and exchange of Web data. XML stream management has therefore recently become a hot spot in data stream research. Different from traditional XML database management systems, an XML stream system aims to provide fast, on-the-fly matching of XML-encoded data to users' queries. It usually has to handle XML streams arriving online at any moment and in any order, and to respond in a timely manner without incurring excessive memory cost. Because XML streams are nested and recursive, and users' interests are represented by XML query languages such as XQuery or XPath, XML stream management systems are very different from relational (i.e., tuple-based) data stream systems. In this paper, a comprehensive overview of research relevant to XML stream management systems is presented: the characteristics of XML stream management are pointed out; typical XML stream management systems are compared; existing approaches for processing XML streams, including those based on automata, indexes, and sequences, are discussed and analyzed; and optimization techniques for processing XML streams are described. Directions for further research are also pointed out.
On the Luminance Overflow in Spread Spectrum Robust Image Watermarking Schemes
Zhao Qiyang and Yin Baolin
2009, 46(10):  1729-1736. 
In spread spectrum robust image watermarking schemes, images are watermarked in a frequency domain and transformed back to the spatial domain. Unlike the unbounded frequency coefficients, the value range in the spatial domain is limited, so luminance overflow frequently occurs on some pixels of the watermarked image, especially for images with high average luminance. All existing solutions to this problem introduce noticeable decreases into the watermark detection results, and the robustness of the watermarking scheme is severely impaired as a result. However, since any operation in the frequency domain is diffused over all pixels, it is hard to investigate the effect of luminance overflow on watermark detection in the frequency domain. The authors propose an approximate calculation of detection values in the spatial domain for spread spectrum watermarking schemes. With this approximation, it is shown that the detection values are significantly decreased for images with higher average luminance and fewer textures. Finally, a new self-adaptive solution is established that adjusts the luminance of watermarked images block-wise, where two adjusting factors are calculated to obtain the best fidelity and robustness. Experiments and analysis show that the new solution is effective in improving the fidelity and robustness of the target watermarking schemes.
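The overflow phenomenon itself is easy to reproduce: adding a spread-spectrum pattern and clipping the result to [0, 255] lowers the correlation a detector relies on, and the effect grows with average luminance. The sketch below embeds directly in the spatial domain as a simplification of a transform-domain scheme; the strength and image model are assumptions.

```python
# Sketch of why luminance overflow hurts spread-spectrum detection: a
# pseudo-random watermark pattern is added to an image, values outside
# [0, 255] are clipped, and the clipping visibly lowers the correlation-style
# detector response, much more so for a very bright image. Spatial-domain
# embedding and the chosen strength are illustrative simplifications.
import random

random.seed(1)
N = 10_000
strength = 6.0
pattern = [random.choice([-1.0, 1.0]) for _ in range(N)]      # spread-spectrum chips

def detect(image, original):
    # correlation-style detector response against the known pattern
    return sum((img - org) * p for img, org, p in zip(image, original, pattern)) / N

for mean_luma in (128, 245):                                   # mid-gray vs. very bright
    original = [min(255.0, max(0.0, mean_luma + random.gauss(0, 5))) for _ in range(N)]
    marked = [o + strength * p for o, p in zip(original, pattern)]
    clipped = [min(255.0, max(0.0, v)) for v in marked]
    overflow = sum(1 for v in marked if v > 255 or v < 0)
    print(f"mean luma {mean_luma}: overflowing pixels {overflow}, "
          f"detector response {detect(clipped, original):.2f} (ideal {strength:.2f})")
```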
An Estimation Algorithm of the Axis of Rotation from 3D Cloud Data Based on Line Element
Zhang Liang and Jiang Xiaofeng
2009, 46(10):  1737-1742. 
Because of its advantages in accuracy and speed, feature-based surface reconstruction has become a hot topic in recent CAD modeling and reverse engineering, and 3D shape recognition and feature extraction are its difficult and key points. To improve performance and robustness, a new method for estimating the axis of rotation based on line element geometry, linear complexes, and kinematic equations is presented. The method first projects the data points of a 3D volume into a line-element image space. A linear complex of the line elements is built, and the axis is then approximated by fitting a kinematic equation to it. The method is more efficient and faster than previous methods because it does not need to estimate accurate surface normal vectors of the point cloud. A new discrete region growing technique, called K-Local-RANSAC, is adopted to exclude the influence of outlier points and to ensure robustness. Line element geometry is employed to effectively represent complex objects according to surface type, the K-Local-RANSAC algorithm uses the linear complex to perform the classification, and the discrete region growing mode clearly improves efficiency. The effectiveness on general rotational surfaces, including tori, and on noisy and fragmentary point clouds has been demonstrated by experiments.
A Denotational Semantic Description for Safe Ambient Calculus
Lü Jianghua and Ma Shinglong
2009, 46(10):  1743-1749. 
Since computation over networks involves varied and complicated information and behaviors, it is difficult to give a denotational semantic description of network computing. As a network computing model, the Ambient calculus is well suited to modeling the mobility and distribution features of network computing. Research on its semantic description has mostly been based on reduction rules. Such descriptions are concise, but they are nondeterministic and disadvantageous for system design and direct implementation. Moreover, these works mostly describe the varied behaviors of the Ambient calculus at a single level, which makes it complicated to understand clearly how these behaviors work individually and together. Therefore, based on the unit ambient of the Ambient calculus, a three-level semantic description framework is given for the Ambient calculus. The three levels are the internal level of an ambient, the ambient level, and the system level. For each of these three levels, the syntax and semantics of its behaviors are defined in detail, and the transformation functions between each pair of levels are given. To show how the framework works, a practical example is described in the framework at all three levels. Finally, a proof of the correctness of this semantic framework is given. This layered semantic description may help users analyze the behaviors of computations in the Ambient calculus and understand the implementation of practical ambient systems.
Model Checking for Mobile Ambients
Jiang Hua and Li Xiang
2009, 46(10):  1750-1757. 
The mobile ambient calculus is a modeling tool that describes distributed mobile computation. It consists of processes and nested sub-ambients, and it has a strong ability to express spatial structure. Ambient logic can describe a process's evolution in time and space, and model checking of ambient logic is a hot research area today. The authors study the model checking of finite-control mobile ambients against formulas of predicate ambient logic with recursion. Previously, Lin H M showed that this problem is decidable and proposed a detailed algorithm for it, but did not consider the efficiency of the algorithm in depth. In this paper, a local model-checking algorithm is developed by applying nested predicate equations. To the authors' knowledge, this is the first algorithm whose time complexity grows exponentially only with the alternation depth of the formula, and the second model-checking algorithm for predicate ambient logic with recursion after Lin H M's work. This paper's contributions are: ① studying the semantic consistency between ambient logic formulas and nested predicates, proving the equivalence between them, and obtaining a detailed algorithm for converting an ambient logic formula into nested predicate equations; ② studying model checking for ambient logic, obtaining a general local algorithm, and analyzing its complexity.
A Stream Checking and Prefetching Algorithm Based on Page Level Stream Buffer Architecture
Liu Li, Chen Mingyu, Bao Yungang, Xu Jianwei, and Fan Jianping
2009, 46(10):  1758-1767. 
Proposed in this paper is a VSS (variable stride stream) algorithm to improve memory access behavior, and a prefetching algorithm based on VSS to optimize memory performance is presented on a page-level stream buffer architecture. With VSS, the problem of virtual address jumps in cyclic accesses is resolved, which improves the stream coverage rate. When the time stride is smaller than the remote access delay, prefetching cannot be completed in time; the stream prefetching algorithm optimized by the time stride dynamically adjusts the prefetching length, which improves prefetching performance. The VSS prefetching algorithm has higher accuracy and lower communication cost than a sequential prefetching algorithm. The performance of the architecture and the prefetching algorithm is evaluated with a performance model, and the application slowdown caused by remote memory is evaluated with the model based on memory access traces such as Linpack and SPEC2000. The results show that, with the help of the cache and the prefetching engine, the performance of most applications with regular memory access patterns is similar to that on a full memory configuration over a high-speed network. Therefore, it is feasible to build a flexible extended remote memory architecture to break the memory capacity restriction for memory-bound applications with little performance decrease, and the memory can be extended easily and without limit.
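A stride-stream detector of the general kind VSS refines can be sketched as follows: consecutive accesses falling in a page are checked for a repeating (possibly non-unit) stride, and once a stream is confirmed, addresses along the stride are prefetched. The table layout, confirmation threshold, and fixed prefetch depth here are assumptions standing in for the paper's time-stride-adjusted length.

```python
# Sketch of stride-stream detection and prefetching at page granularity:
# consecutive addresses in the same page are checked for a constant (possibly
# non-unit, "variable") stride; once confirmed, the next few addresses along
# the stride are prefetched. Threshold and fixed depth are assumptions.
PAGE_SHIFT = 12

class StrideStream:
    def __init__(self):
        self.last_addr = None
        self.stride = None
        self.confirm = 0

    def access(self, addr, depth=4, threshold=2):
        prefetches = []
        if self.last_addr is not None:
            stride = addr - self.last_addr
            if stride != 0 and stride == self.stride:
                self.confirm += 1
            else:
                self.stride, self.confirm = stride, 0
            if self.confirm >= threshold:
                prefetches = [addr + self.stride * i for i in range(1, depth + 1)]
        self.last_addr = addr
        return prefetches

streams = {}                       # one detector per page (page-level stream buffer)
trace = [0x1000, 0x10C0, 0x1180, 0x1240, 0x1300, 0x13C0]   # constant stride of 0xC0
for a in trace:
    page = a >> PAGE_SHIFT
    hits = streams.setdefault(page, StrideStream()).access(a)
    if hits:
        print(hex(a), "-> prefetch", [hex(p) for p in hits])
```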
A Multiple String Matching Algorithm Based on Memory Optimization
Liu Yanbing, Liu Ping, Tan Jianlong, and Guo Li
2009, 46(10):  1768-1776. 
Multiple string matching algorithms play a fundamental role in many network security systems, such as intrusion detection and prevention systems, anti-virus systems, anti-spam systems, and firewalls. It has been observed that the memory usage and cache locality of the matching automata are critical factors affecting the searching speed of multiple string matching algorithms. As pattern sets grow larger and larger, classical multiple string matching algorithms suffer great performance degradation because of the massive storage required by the matching automata. The authors propose optimization strategies for the classical string matching algorithm SBOM that reduce its automaton size and improve its cache locality, resulting in a large gain in searching speed. More specifically, the Factor Oracle of the SBOM algorithm is first replaced with a suffix tree structure; then the rarely accessed automaton nodes are removed through pruning, reducing the suffix tree to nearly linear space; and finally the pruned suffix tree is represented with a double-array trie structure to compress its memory footprint. Compared with SBOM, this algorithm greatly reduces memory usage and improves searching speed. Experiments on random data sets show that the proposed algorithm uses less than 5% of the memory of SBOM and achieves a 100% speed improvement over SBOM. The proposed algorithm is especially suitable for high-speed online pattern matching.