ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 June 2011, Volume 48 Issue 6
Paper
Advances in the Component Verification Technology Based on Model Checking
Jia Yangli, Li Zhoujun, Xing Jianying, and Chen Shikun
2011, 48(6):  913-922. 
Due to its high degree of automation and completeness, and its strong complementarity with component technology, model checking plays an increasingly important role in the analysis and verification of the trustworthy properties of component-based systems. Model checking technology is therefore applied more and more widely to ensure the high-trustworthiness properties of safety-critical applications, and is gradually penetrating the development process of component-based systems. The absence of any survey on this topic has clearly hindered the development of model-checking-based component verification. This paper therefore presents a review, analysis and discussion of recent research progress in model-checking-based component verification methods. The methods are classified into verification based on system specification models and verification based on source code, and the state of the art of each is summarized in detail. The relationship between model checking and the verification of component trustworthy properties is discussed first. Then verification methods based on the SOFA, Fractal, CORBA and other component models, and static and dynamic source-code-based verification methods, are summarized respectively. Finally, the main challenges and future development directions of model-checking-based component verification technology are pointed out.
Survey of Software Tamper Proofing Technique
Wang Chaokun, Fu Junning, Wang Jianmin, and Yu Zhiwei
2011, 48(6):  923-933. 
With the wide use of computer technologies, software has become indispensable in our daily life, and the corresponding security issues in software systems are increasingly prominent. In particular, how to design a practical protection scheme is of great significance to the software research and development industry. As one of the key methods of software protection, the software tamper proofing technique has attracted much attention from researchers at home and abroad in recent years. It aims at preventing unauthorized modification and use of critical program information, and at triggering responses once tampering is detected. Presented in this paper is a survey and analysis of software tamper proofing. In our discussion, tamper proofing methods are classified into two categories: static methods based on code obfuscation, and dynamic methods based on verification and response. The strengths and weaknesses of these methods are presented in detail. Finally, the survey summarizes not only the characteristics of software tamper proofing techniques, but also their open problems and future work.
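As an illustration of the verification-response idea described in this abstract, here is a minimal Python sketch, not taken from the paper: it hashes the bytecode of a protected function at run time and halts the program when the digest deviates from a reference recorded earlier. All names and the response policy are hypothetical.

    import hashlib

    def region_digest(func):
        # Hash the bytecode of a protected function.
        return hashlib.sha256(func.__code__.co_code).hexdigest()

    def protected():
        return 6 * 7

    REFERENCE = region_digest(protected)   # recorded at build/install time

    def verify_and_respond():
        # Verification-response: compare the current digest against the
        # reference and refuse to continue if tampering is detected.
        if region_digest(protected) != REFERENCE:
            raise SystemExit("tampering detected: refusing to continue")

    verify_and_respond()
    print(protected())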
Trust Evaluation Model Based on Node Behavior Character
Tian Junfeng, Du Ruizhong, and Liu Yuling
2011, 48(6):  934-944. 
Because of the network's openness, diversity and complexity, trustworthiness faces severe challenges. In order to guarantee the credibility of the whole system, it is essential to establish a valid management model based on trusted relationships among nodes. At the same time, the newly emerging computing model based on independent cooperation is considered, which will influence the development trend of network node behavior. Focusing on the trustworthiness of network node behavior, this paper abstracts the network system as a manager monitor agent (MMA) with a hierarchical structure, based on trustworthy networks and agent techniques, and studies the logical components, functions and strategies of the MMA model. By analyzing the various factors influencing node behavior, the paper describes a new method and strategy that monitor and measure node behavior against the expected behavior character for trusted computing of the node. According to the manifestation of node behavior and historical information, the trust value of node behavior is adjusted and predicted, and a trusted interval is defined to realize the organization and management of trusting relationships among nodes. The paper then establishes a dynamic trust evaluation model based on node behavior characters. Finally, the experimental results show the effectiveness and feasibility of the model.
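The abstract does not give the concrete update rule, so the following Python sketch only illustrates the general idea: a behavior trust value is adjusted from historical behavior scores and checked against a trusted interval. The smoothing factor and interval bounds are assumptions, not the paper's parameters.

    def update_trust(history, alpha=0.3):
        # Exponentially weighted trust value: recent behavior scores
        # (each in [0, 1]) count more than older ones.
        trust = history[0]
        for score in history[1:]:
            trust = alpha * score + (1 - alpha) * trust
        return trust

    TRUSTED_INTERVAL = (0.6, 1.0)          # hypothetical bounds

    history = [0.9, 0.8, 0.95, 0.4, 0.85]
    t = update_trust(history)
    lo, hi = TRUSTED_INTERVAL
    print("trust =", round(t, 3), "trusted:", lo <= t <= hi)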
A Hierarchical Evaluation Approach for Network Security Based on Threat Spread Model
Chen Feng, Liu Dehui, Zhang Yi, and Su Jishu
2011, 48(6):  945-954. 
Network systems generally face intrusion by external and internal threat agents. Moreover, threat agents can spread threats via the interrelations among vulnerabilities and components in the network, bringing about potential threats. Designing a reasonable model to identify, analyze and quantitatively measure the consequences of potential threats is one of the main challenges in network security evaluation. For this issue, a hierarchical evaluation approach for network security based on a threat spread model is proposed. First, the threat spread model is put forward to identify threat agents, analyze the spread paths of threats, and predict potential threats. The threat spread model includes a target network model, a threat agent model, threat spread graphs and a threat spread algorithm. On this basis, a security measure model is presented to compute the danger indexes of services, hosts and the network system respectively. The security measure model is composed of spread graphs, metrics, metric computing functions and index computing functions. Based on this approach, a prototype system is implemented and applied to an enterprise local network. The results demonstrate the correctness of the threat spread model and the advantage of the approach over traditional methods.
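The hierarchical index computation can be sketched as follows; the weighted-sum aggregation is an assumption, since the abstract does not reproduce the paper's metric and index computing functions.

    # Danger indexes aggregated bottom-up: services -> hosts -> network.
    services = {
        "web":  {"host": "h1", "danger": 0.8, "weight": 0.6},
        "db":   {"host": "h1", "danger": 0.3, "weight": 0.4},
        "mail": {"host": "h2", "danger": 0.5, "weight": 1.0},
    }

    def host_indexes(services):
        idx = {}
        for s in services.values():
            idx[s["host"]] = idx.get(s["host"], 0.0) + s["weight"] * s["danger"]
        return idx

    hosts = host_indexes(services)
    network_index = sum(hosts.values()) / len(hosts)
    print(hosts, round(network_index, 3))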
Cache Based AES Attack Implementation and Its Theoretical Analysis
Zhang Suiyu, Han Jun, Lu Shiting, and Zeng Xiaoyang
2011, 48(6):  955-963. 
This paper proposes a highly efficient cache-based timing attack against AES and other cryptographic algorithms running on SoC platforms. The attack is feasible because cache behavior leaks information that can actually be observed during AES execution, AES being implemented with table lookups for performance. We can completely recover the 128-bit cipher key by searching for the statistical relationship between the cipher key and encryption timing during the first two rounds. Compared with known approaches, our method is much easier to carry out and more robust in noisy environments caused by hardware and software interference. Additionally, by introducing the notion of the number of samples needed for a successful attack, which characterizes the strength of a cryptographic algorithm, we present an analytical model based on statistical differential timing analysis. This model shows that different attack strategies, system noise and other factors exert very different influences on the necessary sample number. Using our method, we have successfully compromised AES on several SoC platforms and verified the analytical model on a MIPS 4Kc SoC platform running Linux 2.4. By studying this analytical model, some common features of cache-based timing attacks are deduced, and countermeasures are proposed accordingly.
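A heavily simplified Python sketch of the statistical differential timing analysis on synthetic data: first-round lookups only reveal the cache-line (high-nibble) bits of a key byte, which is why the paper combines the first two rounds to confirm the full key. The timing model and noise level below are assumptions.

    import random

    def recover_high_nibble(samples):
        # For each guess of the key byte's high nibble, split samples by
        # whether the first-round lookup p ^ k would hit the hot cache
        # line; the correct guess maximizes the slow/fast timing gap.
        best, best_gap = None, -1.0
        for guess in range(16):
            fast = [t for p, t in samples if (p >> 4) == guess]
            slow = [t for p, t in samples if (p >> 4) != guess]
            if fast and slow:
                gap = sum(slow) / len(slow) - sum(fast) / len(fast)
                if gap > best_gap:
                    best, best_gap = guess, gap
        return best

    key = 0x3A
    samples = [(p, 90 + (10 if (p ^ key) >> 4 else 0) + random.gauss(0, 2))
               for p in (random.randrange(256) for _ in range(4000))]
    print(hex(recover_high_nibble(samples)))   # expect 0x3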
Filtering Intrusion Forensic Data Based on Attack Signatures
Fu Xiao and Xie Li
2011, 48(6):  964-973. 
Computer forensics is an emerging field concerned with the processing of computer evidence. The field is important and practical, so it has drawn more and more attention in recent years. Intrusion forensics is a specific area of computer forensics applied to computer intrusion activities. It is a hot area because a large proportion of computer crimes are intrusions. When investigating intrusion activities, one key step is obtaining intrusion evidence. In order to obtain this kind of evidence automatically, an attack-signature-based method for filtering intrusion forensic data is proposed. It mainly includes the following steps: first, the detailed behaviors of the attack being investigated are reconstructed from its attack signatures; then the attack features required by the filter are extracted from these details; finally, according to the similarity between the attack features and candidate data, all evidence related to the attack being investigated can be obtained. Experimental results on the DARPA 2000 dataset show that our method has high accuracy and its completeness is almost 100%. Compared with current methods, it needs little manual work and can process more complex intrusion scenarios. Moreover, it has higher performance and can find more types of evidence.
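The final similarity-based filtering step might look like the following sketch; the feature encoding, the Jaccard measure and the threshold are all assumptions for illustration, not the paper's actual filter.

    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    # Hypothetical attack features reconstructed from attack signatures.
    attack_features = {"port:2222", "proc:sadmind", "syscall:execve"}

    candidates = [
        {"id": 1, "features": {"port:2222", "proc:sadmind", "user:root"}},
        {"id": 2, "features": {"port:80", "proc:httpd"}},
    ]

    THRESHOLD = 0.5                        # assumed cut-off
    evidence = [c["id"] for c in candidates
                if jaccard(c["features"], attack_features) >= THRESHOLD]
    print(evidence)                        # -> [1]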
A Fine-Grain Trust Model Based on Domain and Bayesian Network for P2P E-Commerce System
Tian Junfeng and Tian Rui
2011, 48(6):  974-982. 
Trust models have been widely applied to P2P e-commerce, but the trust grain of current models is too coarse to reflect peers' real behavior well. To solve this problem, a novel fine-grain trust model (FG-trust) is presented, which can compute the trust of one peer in different domains and different aspects. The model adopts multi-agent technology, uses agents to manage and maintain the network, and divides the network into multiple domains. Each domain sets a domain agent to manage the peers in it, and a manager agent is set over the entire network to manage inter-domain messaging. In order to measure the trust value of a peer in any domain while taking full account of the impact of different inter-domain relationships on recommendation trust, the concept of a domain model is introduced into FG-trust. The model uses a Bayesian network to compute peers' trust values, which makes it possible to predict the trust value of a peer in any aspect. An updating method for the Bayesian network model and the domain model is also proposed, which can effectively constrain peers' malicious behavior. Finally, the effectiveness and feasibility of the model are illustrated by experiments.
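The abstract does not spell out the Bayesian network structure, so this sketch only shows Bayesian trust estimation for one peer per domain and aspect, using a beta-distribution posterior over transaction outcomes, a common simplification rather than the paper's exact construction.

    def beta_trust(successes, failures):
        # Posterior mean of a Beta(1, 1) prior updated with observed
        # satisfactory / unsatisfactory transactions.
        return (successes + 1) / (successes + failures + 2)

    # Per-domain, per-aspect interaction history of one peer (hypothetical).
    history = {("electronics", "delivery"): (18, 2),
               ("electronics", "quality"):  (12, 8)}

    for (domain, aspect), (s, f) in history.items():
        print(domain, aspect, round(beta_trust(s, f), 3))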
An Environment-Adaptive Role-Based Access Control Model
Wu Xinsong, He Yeping, Zhou Zhouyi, and Liang Hongliang
2011, 48(6):  983-990. 
Large-scale network-based applications, such as an infectious disease reporting system, require that access control policy can change in response to environment changes. However, existing access control models are inflexible in this respect because they lack mechanisms to capture environment changes and to change access control policy accordingly. In this paper, we analyze the access control requirements of an infectious disease reporting system and, based on this analysis, extract the general access control requirements of large-scale network-based applications. By extending the RBAC model, we design the components of an environment-adaptive role-based access control model called EA-RBAC and give its formal definition. Compared with traditional RBAC models, EA-RBAC adds event triggers, event-based equivalent state transitions, environment roles and virtual domain mechanisms. Through event triggers and equivalent state transitions, the system can perceive environment changes and transit between states accordingly. Through environment roles and virtual domains, the system can dynamically adjust environment roles and user authorization based on the current state. The EA-RBAC model can enforce flexible access control policy for large-scale network-based applications while preserving security. As an example, the paper also analyzes the applicability of EA-RBAC to an infectious disease reporting system.
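A minimal sketch of the event-trigger and state-transition mechanism: events move the system between equivalent states, and each state activates a different set of environment roles. The state, event and role names are illustrative only.

    # Event-triggered state transitions and the roles active in each state.
    TRANSITIONS = {("normal", "outbreak_reported"):  "emergency",
                   ("emergency", "outbreak_cleared"): "normal"}
    STATE_ROLES = {"normal":    {"reporter"},
                   "emergency": {"reporter", "emergency_reader"}}

    class EARBAC:
        def __init__(self):
            self.state = "normal"
        def on_event(self, event):
            # Perceive an environment change and transit accordingly.
            self.state = TRANSITIONS.get((self.state, event), self.state)
        def active_roles(self):
            return STATE_ROLES[self.state]

    m = EARBAC()
    m.on_event("outbreak_reported")
    print(m.state, m.active_roles())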
A Congestion Aware Routing Protocol Based on Network Coding in Wireless Sensor Networks
Fu Bin, Li Renfa, Liu Caiping, and Xiao Xiongren
2011, 48(6):  991-999. 
Event-driven wireless sensor networks (WSNs) have bursty traffic, which leads to congestion in local areas and deeply affects data transmission reliability. At the same time, because of scarce resources such as energy, computational capability and storage space, as well as rapid changes in wireless link characteristics such as signal strength, interference and multi-path propagation, providing reliable data transmission in WSNs is an important and challenging issue. In this paper, congestion-aware routing is combined with network coding. A method of region congestion detection is proposed: based on the congestion degrees of neighbor nodes, a formula for the congestion of the local area is given, which indicates the congestion degree of the local area earlier and more accurately. An on-demand congestion-aware routing protocol based on network coding is then proposed. During route discovery, whether a node is selected to deliver data depends on its area congestion degree, and the number of coded packets is estimated according to link reliability. Simulation results in NS2 show that congestion is appropriately avoided and the data delivery rate is improved. By reducing the effect of link failures, data transmission reliability in wireless sensor networks is improved.
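The abstract does not reproduce the congestion formula, so the sketch below assumes a weighted combination of a node's own buffer occupancy and the mean congestion degree reported by its neighbors.

    def area_congestion(own_occupancy, neighbor_degrees, w=0.5):
        # Combine local buffer occupancy (0..1) with neighbors' reported
        # congestion degrees to detect regional congestion early.
        if not neighbor_degrees:
            return own_occupancy
        mean_nb = sum(neighbor_degrees) / len(neighbor_degrees)
        return w * own_occupancy + (1 - w) * mean_nb

    print(area_congestion(0.7, [0.6, 0.9, 0.8]))   # -> 0.733...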
CAMR: Network Coding-Aware Multicast Routing Protocol in Wireless Mesh Networks
Huang Chuanhe, Yang Wenzhong, Wang Bo, Zhang Zhenyu, and Xu Liya
2011, 48(6):  1000-1009. 
Network coding is a new paradigm for improving network throughput that can achieve the multicast capacity of a network, and applying it to mesh networks is significant for their further usability. A coding-aware routing protocol is one that can fully identify and exploit the network coding opportunities in a wireless network. Although there are a few network-coding-based unicast routing protocols, they cannot sufficiently identify and utilize the coding opportunities in wireless networks, and the design of coding-aware routing protocols remains a hot topic; in fact, there are few coding-aware multicast routing protocols so far. In this paper, a coding-aware multicast routing protocol (CAMR) is proposed. CAMR exploits a novel routing metric, the coding-aware routing metric (CAM), to measure the network coding opportunity and coding capability of a mesh node. Based on CAM, the CAMR protocol can fully discover and leverage the existing coding opportunities in mesh networks, further enhancing multicast throughput. Extensive simulation results show the advantages and validity of the CAMR protocol.
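As a rough illustration of what a coding-aware metric can look like, the sketch below discounts hops whose relay node can code packets from crossing flows; the discount factor and the structure of CAM itself are assumptions, not the paper's definition.

    def cam(path, coding_nodes, discount=0.5):
        # Hop cost, discounted at relays with a coding opportunity.
        return sum(discount if n in coding_nodes else 1.0 for n in path)

    print(cam(["a", "b", "c"], coding_nodes={"b"}))   # -> 2.5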
An Identification Method Combining Data Streaming Counting with Probabilistic Fading for Heavy-Hitter Flows
Li Zhen, Yang Yahui, Xie Gaogang, and Qin Guangcheng
2011, 48(6):  1010-1017. 
Identifying heavy-hitter flows in the network is of tremendous importance for many network management activities, such as monitoring, management and charging. Network administrators usually pay special attention to these flows, and how to find them has been the concern of many studies in the past few years. Lossy counting and probabilistic lossy counting are among the best-known algorithms for finding heavy hitters, but they have limitations. The challenge is to reduce memory consumption effectively while achieving better accuracy. In this work, a probabilistic fading method combined with data streaming counting, called PFC (probabilistic fading counting), is proposed. PFC leverages the advantages of data streaming counting and finds heavy hitters by exploiting the power-law characteristic of network flows. Using the network's power-law behavior and continuity, PFC accelerates the removal of non-active and aging flows from the table of records, thereby reducing memory consumption and decreasing the false positive ratio. Comparisons with lossy counting and probabilistic lossy counting on real Internet traces suggest that PFC is remarkably efficient and more accurate. In particular, experimental results show that PFC has 60% lower memory consumption without increasing the false positive ratio.
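A minimal sketch of lossy counting extended with a probabilistic fading step at each window boundary; the fading probability and decay factor are assumptions, since the abstract does not give PFC's exact rule.

    import math, random

    def pfc(stream, epsilon=0.01, fade_p=0.5, decay=0.5):
        width = math.ceil(1 / epsilon)
        table = {}                        # flow -> [count, delta]
        for i, flow in enumerate(stream, 1):
            b = math.ceil(i / width)      # current bucket id
            if flow in table:
                table[flow][0] += 1
            else:
                table[flow] = [1, b - 1]
            if i % width == 0:            # window boundary
                for f in list(table):
                    count, delta = table[f]
                    # Probabilistic fading: aging, non-active flows decay
                    # and leave the table faster than in lossy counting.
                    if random.random() < fade_p:
                        count = int(count * decay)
                    if count + delta <= b:
                        del table[f]
                    else:
                        table[f] = [count, delta]
        return table

    stream = ["big"] * 500 + ["f%d" % i for i in range(300)] + ["big"] * 500
    result = pfc(stream)
    print("big" in result, len(result))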
Collaborative Classification Mechanism for Privacy-Preserving
Zhang Zhancheng, Wang Shitong, and Fu-Lai Chung
2011, 48(6):  1018-1028. 
Privacy preserving is becoming an increasingly important task in the Web-enabled world. We propose a novel two-party privacy-preserving classification solution called the collaborative classification mechanism for privacy preserving (C²MP²). It is inspired by the observations that the mean vector and covariance matrix globally describe data location and orientation, and that sharing this global information with others does not disclose one's own privacy. The model collaboratively trains a decision boundary from two hyperplanes, each constructed from one party's own private information and the counterparty's global information. As a major contribution, we show that C²MP² can protect both the data entries and the number of entries. We give the C²MP² model definition, provide a geometrical interpretation, and present theoretical justifications. To guarantee the security of the testing procedure, we develop a testing algorithm based on a homomorphic encryption scheme. Moreover, we show that C²MP² reduces to the existing minimax probability machine (MPM), support vector machine (SVM) and maxi-min margin machine (M⁴) models when the private data satisfy certain conditions. We also extend C²MP² to a nonlinear classifier by exploiting the kernel trick without privacy disclosure. Furthermore, a series of evaluations on both toy data sets and real-world benchmark data sets, compared with MPM and SVM, demonstrates the advantages of the new model in protecting privacy.
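The privacy notion behind C²MP² can be illustrated as follows: each party releases only its class mean and covariance, and a separating plane is built from those global statistics alone. The Fisher-style plane construction below is a stand-in for the paper's hyperplane construction, used only to show that no individual record needs to be exchanged.

    import numpy as np

    rng = np.random.default_rng(0)
    # Party A's private records (class +1) stay local ...
    X_a = rng.normal([0, 0], 1.0, size=(100, 2))
    # ... only these global statistics are shared with party B:
    mu_a, cov_a = X_a.mean(axis=0), np.cov(X_a.T)

    # Party B holds its own private data (class -1) plus A's statistics.
    X_b = rng.normal([4, 4], 1.0, size=(80, 2))
    mu_b, cov_b = X_b.mean(axis=0), np.cov(X_b.T)

    # Discriminant direction computed from the shared statistics only.
    w = np.linalg.solve(cov_a + cov_b, mu_a - mu_b)
    b = -w @ (mu_a + mu_b) / 2
    print(np.sign(w @ np.array([0.5, 0.2]) + b))   # classified as +1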
A Feature Selection Method for TWSVM via a Regularization Technique
Ye Qiaolin, Zhao Chunxia, and Chen Xiaobo
2011, 48(6):  1029-1037. 
Twin support vector machine (TWSVM), a variant of GEPSVM, is based on the idea of the generalized eigenvalue proximal support vector machine (GEPSVM): it determines two nonparallel planes by solving two related SVM-type problems, so that its training cost is a quarter of that of standard SVM. Besides keeping the superior characteristics of GEPSVM, TWSVM significantly outperforms GEPSVM in classification performance. However, the method requires the solution of two smaller quadratic programming problems (QPPs), and few modifications of it have been proposed that automatically select the input features. In this paper, by introducing a regularization term into the objective functions of the TWSVM pair, the authors first propose a modification of TWSVM, called RTWSVM. With a formulation entirely different from TWSVM, RTWSVM guarantees that the two QPPs are strongly convex, so each has a unique global solution. Then a feature selection method based on RTWSVM is proposed, which generates the two planes directly by solving two sets of linear equations and requires no special optimization solvers. The method obtains classification performance comparable to TWSVM, with faster training, better suppression of input features and a better ability to reduce the number of kernel functions.
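How two nonparallel planes can come from linear systems instead of QPPs can be sketched with a regularized least-squares formulation for one plane; this is a stand-in under assumed notation, not necessarily RTWSVM's exact system.

    import numpy as np

    def proximal_plane(A, B, c=1.0, reg=1e-4):
        # Solve one linear system for the class-A plane z = (w, b):
        # minimize ||G z||^2 + c ||H z + e||^2 with G = [A 1], H = [B 1];
        # the reg * I term keeps the problem strongly convex, so the
        # solution is unique (mirroring the role of RTWSVM's regularizer).
        G = np.hstack([A, np.ones((len(A), 1))])
        H = np.hstack([B, np.ones((len(B), 1))])
        e = np.ones(len(B))
        M = G.T @ G + c * H.T @ H + reg * np.eye(G.shape[1])
        z = np.linalg.solve(M, -c * H.T @ e)
        return z[:-1], z[-1]               # w, b

    rng = np.random.default_rng(1)
    A = rng.normal(0, 1, (50, 2))
    B = rng.normal(5, 1, (40, 2))
    print(proximal_plane(A, B))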
Maximum Entropy Relief Feature Weighting
Zhang Xiang, Deng Zhaohong, Wang Shitong, and Choi Kupsze
2011, 48(6):  1038-1048. 
A recent advance in Relief feature weighting is that it can be approximately expressed as a margin maximization problem, so its distinctive properties can be investigated with the help of optimization theory. Although Relief has been widely used, it lacks a mechanism for dealing with outliers, and how to enhance its robustness and adjustability in noisy environments is still unclear. To enhance Relief's adjustability and robustness, more robust and adaptive Relief feature weighting algorithms are investigated by integrating the maximum entropy technique into Relief. First, a new margin-based objective function integrating maximum entropy is proposed within the optimization framework, where two maximum entropy terms control the feature weights and the sample force coefficients respectively. Then, by applying optimization theory, several useful theoretical results are derived from the proposed objective function, and a set of robust Relief feature weighting algorithms is developed for two-class data, multi-class data and online data. As demonstrated by extensive experiments on UCI benchmark datasets and gene expression datasets, the proposed algorithms show performance competitive with state-of-the-art algorithms and much better robustness on datasets with noise and/or outliers.
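The flavor of entropy-regularized feature weighting can be shown in a few lines: per-feature margin contributions are mapped to weights by a softmax, which is the closed-form maximizer of a linear margin term plus an entropy term; the inverse temperature beta controls how uniform, hence how noise-robust, the weights are. This is an illustrative simplification, not the paper's derivation.

    import numpy as np

    def maxent_relief_weights(margins, beta=1.0):
        # margins[j]: accumulated |x - nearmiss_j| - |x - nearhit_j| for
        # feature j; softmax solves max_w beta * w.m + H(w), sum(w) = 1.
        z = beta * np.asarray(margins, dtype=float)
        e = np.exp(z - z.max())
        return e / e.sum()

    margins = [2.0, 0.5, -1.0]             # feature 0 is most relevant
    for beta in (0.5, 2.0):
        print(beta, np.round(maxent_relief_weights(margins, beta), 3))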
Uniform Design and Reconstructive BLX-α Based Scatter Search for Continuous Optimization Problem
Fan Tiehu, Qin Guihe, and Zhao Qi
2011, 48(6):  1049-1058. 
Scatter search is a newly emerging population-based evolutionary method that, unlike genetic algorithms, searches for global optima or satisfactory solutions by operating on a small reference set balancing intensification and diversification, making much use of systematic sub-methods and limited use of randomization. Scatter search also uses improvement strategies to efficiently produce local tuning of the solutions, and a remarkable aspect of the method is the trade-off between the exploration ability of the combination method and the exploitation capacity of the improvement mechanism. This paper proposes a novel scatter search algorithm based on uniform design and a reconstructive BLX-α operator (URBSS) for nonlinear continuous global optimization, exploiting the flexibility of scatter search. It enhances the diversification generation method used in previous continuous scatter search by introducing uniform design, and employs a reconstructive BLX-α operator, one of the most effective combination methods for real-coded genetic algorithms, as the solution combination method to form offspring. Extensive simulation experiments are carried out on eight widely used test functions, with comparisons against other well-known continuous optimization algorithms from the literature. The computational results show that URBSS is effective and precise in identifying global optima, with improved global optimization ability and convergence rate.
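The BLX-α operator itself has a standard definition, which the sketch below follows (each offspring gene is drawn uniformly from the parents' interval extended by α times its length on both sides); the paper's reconstructive variant is not reproduced here.

    import random

    def blx_alpha(p1, p2, alpha=0.5):
        # For each gene, sample uniformly from [lo - a*I, hi + a*I],
        # where I = hi - lo is the distance between the parent genes.
        child = []
        for x, y in zip(p1, p2):
            lo, hi = min(x, y), max(x, y)
            i = hi - lo
            child.append(random.uniform(lo - alpha * i, hi + alpha * i))
        return child

    print(blx_alpha([0.0, 2.0], [1.0, 4.0]))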
QoS-Aware Semantic Web Service Modeling and Discovery
Wan Changlin, Shi Zhongzhi, Hu Hong, and Zhang Dapeng
2011, 48(6):  1059-1066. 
For the modeling of QoS-aware semantic Web services, an upper-level ontology named WS-QMO is proposed to describe and publish both QoS information and QoS requirements in a semantic way. In WS-QMO, QoS attributes and metrics are defined by OWL concepts and roles, and QoS constraints are defined by SWRL rules. As far as we know, the WS-QMO ontology satisfies most current requirements for QoS-based Web service description. In addition, thanks to the proper integration of SWRL rules, the syntax of WS-QMO is rather concise, and it has richer descriptive capability for complex QoS attributes compared with existing Web service modeling ontologies. Furthermore, a WS-QMO-based QoS knowledge base is used in our semantic Web service architecture, which has dynamic description logic as its logical foundation. In this knowledge base, the constraint verification problem can be solved as a reasoning problem of description logic with SWRL rules. The main knowledge base management mechanism is built on top of the constraint verification algorithms. More importantly, this mechanism yields a new automated Web service discovery algorithm that supports complex QoS constraints and strikes a balance between semantic description capability and computational complexity.
The Translation Mining of the Out of Vocabulary Based on Wikipedia
Sun Changlong, Hong Yu, Ge Yundong, Yao Jianmin, and Zhu Qiaoming
2011, 48(6):  1067-1076. 
Query translation is one of the key factors affecting the performance of cross-language information retrieval (CLIR). In query processing, mining translations of out-of-vocabulary (OOV) terms, i.e., words or phrases that cannot be found in the dictionary, is important for improving CLIR. In this paper, according to Wikipedia's data structure and language features, we divide the translation environment into a target-existence environment and a target-deficit environment. Given the difficulty of translation mining in the target-deficit environment, we use frequency change information and adjacency information to extract candidate units, and compare our approach with common unit extraction methods; the results verify that our methods are more effective. We establish a mixed translation mining strategy based on the frequency-distance model, the surface pattern matching model and the summary-score model, adding the models one by one to verify the contribution of each. The experiments use search-engine-based OOV mining as the baseline and evaluate the results by TOP-1 accuracy. The results show that the Wikipedia-based mixed translation mining method achieves a correct translation rate of 0.6822, a 6.98% improvement over the baseline.
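Candidate unit extraction from adjacency and frequency information might look like the following sketch, which collects n-grams whose frequency clears a threshold; the thresholds and the n-gram range are assumptions, not the paper's settings.

    from collections import Counter

    def candidate_units(tokens, max_n=3, min_freq=2):
        # Adjacent token sequences (n-grams) that recur often enough
        # become candidate translation units.
        counts = Counter()
        for n in range(2, max_n + 1):
            for i in range(len(tokens) - n + 1):
                counts[tuple(tokens[i:i + n])] += 1
        return [g for g, c in counts.items() if c >= min_freq]

    text = "new york is big and new york is busy".split()
    print(candidate_units(text))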
An Efficient Ontology Ranking Algorithm—MIDSRank
Zhang Zhiqiang, Song Weitao, and Xie Xiaoqin
2011, 48(6):  1077-1088. 
Using domain ontologies to represent knowledge has proved to be a useful mechanism and format for managing and exchanging information, and ontologies are the backbone of knowledge representation for the semantic Web. There are now many ontologies on the Internet, and due to the cost and difficulty of building ontologies, a number of ontology libraries and search engines have emerged to facilitate reusing such knowledge structures. To reduce the cost of building an ontology, people usually first search for candidates on the Internet, and then integrate and refine these candidates to establish their own ontology. Because so many candidates can be found, ontology ranking has become a hot research topic. This paper categorizes existing ontology ranking methods into two types and analyzes their characteristics. A new ontology ranking method is then introduced, which combines the merits of existing algorithms to improve the experience of ontology search. It improves AKTiveRank by replacing the CEM factor of AKTiveRank with a new factor, CIM, a measure of the importance of a concept within an ontology. Finally, two experiments are designed to evaluate the MIDSRank algorithm; the results show that the new method is more efficient.
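A concept importance measure in the spirit of CIM can be sketched as degree centrality over the ontology's relation graph; the paper's actual CIM definition may differ.

    from collections import Counter

    def concept_importance(edges):
        # Rank concepts by their share of relations in the ontology graph.
        deg = Counter()
        for a, b in edges:
            deg[a] += 1
            deg[b] += 1
        total = sum(deg.values())
        return {c: d / total for c, d in deg.items()}

    edges = [("animal", "dog"), ("animal", "cat"), ("dog", "puppy")]
    print(concept_importance(edges))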
A Tiered Storage System for Massive Data: TH-TS
Ao Li, Yu Deshui, Shu Jiwu, and Xue Wei
2011, 48(6):  1089-1100. 
With the rapid growth of storage scale, the key issues in building massive storage systems are reducing the total cost of ownership (TCO) and improving system performance. In this paper, the design and implementation of a tiered storage system, TH-TS (Tsinghua Tiered Storage), are presented. In TH-TS, we design CuteMig, an approach for migrating files efficiently in a tiered storage system. CuteMig promotes files based on their promotion costs and benefits, using an incremental scanning technique to acquire file access information, and adaptively demotes files based on the free space of the high-end storage tier to ensure that the high-end storage system keeps free space available. It eliminates file replacement before promotion, and when files are promoted it reserves their replicas in the low-end storage system. This approach overcomes the shortcomings of the traditional on-demand promotion and replacement demotion approaches, resulting in improved system performance and a smaller amount of migrated data. The evaluation results reveal that TH-TS effectively migrates files among tiers, reducing the average response time by 10% compared with LRU and 39% compared with GreedyDualSize, and that TH-TS reduces the amount of promoted data by 32% and 59%, and the amount of demoted data by 47% and 66%, compared with LRU and GreedyDualSize respectively.
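The cost/benefit promotion rule can be sketched as below; the benefit and cost estimates are assumptions, since the abstract does not give CuteMig's actual cost model.

    def should_promote(access_freq, size_mb, tier_latency_gap_ms,
                       bandwidth_mbps=100.0):
        # Benefit: latency saved by expected future accesses served from
        # the fast tier. Cost: the one-time copy; the low-tier replica is
        # kept (as in CuteMig), so no demotion write-back is charged.
        benefit = access_freq * tier_latency_gap_ms
        cost = size_mb / bandwidth_mbps * 1000.0    # copy time in ms
        return benefit > cost

    print(should_promote(access_freq=200, size_mb=64, tier_latency_gap_ms=5))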
An Optimal Storage Strategy for Static Path Labeling Scheme
Chen Ziyang and Zhou Junfeng
2011, 48(6):  1101-1108. 
By maintaining the information on the path from the root node to the current node, the positional relationship between nodes of an XML document can be determined efficiently by comparing their path labels, so the overall performance of XML query processing can be improved significantly. Moreover, a good storage strategy for path labels can not only improve the utilization of disk space but also reduce costly I/O operations. In this paper, an optimal storage strategy for a static path labeling scheme is proposed to tackle this problem. The basic idea is that when the components of path labels are stored, they are assigned different prefixes according to the sum of the frequencies of the region they belong to, which reduces the storage space efficiently. In contrast to existing methods, the prefixes of path label components are not determined by pre-specified prefixes, which are too inflexible to exploit the frequency information of different components. The experimental results verify the feasibility and effectiveness of the proposed storage strategy.
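Assigning prefixes by region frequency can be illustrated with a Huffman code over the frequency sums, so that components from frequent regions get shorter prefixes; the paper's static scheme may differ in its coding details.

    import heapq, itertools

    def prefix_codes(freqs):
        # Huffman coding: regions whose components occur more often get
        # shorter prefixes, shrinking the stored path labels.
        tie = itertools.count()
        heap = [(f, next(tie), {name: ""}) for name, f in freqs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)
            f2, _, c2 = heapq.heappop(heap)
            merged = {k: "0" + v for k, v in c1.items()}
            merged.update({k: "1" + v for k, v in c2.items()})
            heapq.heappush(heap, (f1 + f2, next(tie), merged))
        return heap[0][2]

    # Frequency sums of label components per region (hypothetical).
    print(prefix_codes({"regionA": 120, "regionB": 30, "regionC": 8}))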