ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 August 2009, Volume 46 Issue 8
Paper
A Survey of Network Information Content Audit
Sun Qindong, Guan Xiaohong, and Zhou Yadong
2009, 46(8):  1241-1250. 
Nowadays, the large-scale spread of pornographic content and rumours on the Internet has become a serious network security problem. The authors give a survey of the main network information content audit techniques, which typically sniff network packets at key points of the network and filter them to find content that violates the security policy, and which can effectively prevent the spread of harmful content. The key issues in content auditing are discussed from a global-to-local and bottom-to-top viewpoint. Firstly, current work on auditing models is introduced and the main deficiencies of existing models are presented. Secondly, techniques for audit data capture and load balancing are analyzed. Thirdly, the development of content analysis technologies is introduced, such as pattern matching, text semantic analysis, hot topic extraction, and malicious image recognition. Fourthly, methods for evaluating and predicting the content security situation, and technologies for online data processing and blocking, are discussed. Finally, by analyzing the open problems and issues, which include streaming video content auditing, dynamic keyword updating, analysis of dynamic information flow characteristics, and the dynamic propagation process of network information, directions and suggestions for further research are proposed as concluding remarks.
A Cooperative Mechanism for Inter-Domain Routing Management
Hu Ning, Zou Peng, and Zhu Peidong
2009, 46(8):  1251-1259. 
The inter-domain routing system is the fundamental infrastructure of the Internet. It consists of many interconnected autonomous systems (ASes), each of which configures and operates its routing policy independently. Uncoordinated routing decisions can cause various problems such as routing oscillation, routing security incidents, and traffic engineering failures. Detecting and removing routing policy conflicts requires cooperation among multiple ASes. For competitive reasons, however, ISPs keep information such as routing policies and network topology confidential, and this privacy requirement hinders cooperation among them. Because there is no effective coordination mechanism for access to confidential information, cross-domain routing policy management cannot be implemented. To improve the cooperative ability of ISPs, the authors design a routing policy consistency analysis method oriented to multi-AS cooperation and based on the discrete logarithm assumption, which can detect inconsistencies among the routing policies of multiple ASes without leaking the confidential contents of those policies. Compared with solutions based on additively homomorphic asymmetric encryption, this method does not require an oblivious third party and has lower computation and communication cost. It does not require modifying the BGP routing protocol, so it is easier to deploy and cheaper to implement. It can be used in many applications such as routing policy conflict detection, route validation, routing monitoring, and cooperative intrusion detection.
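The paper itself is not reproduced here, but the kind of discrete-logarithm-based comparison it alludes to can be sketched with commutative blinding: each AS raises (hashed) policy entries to a private exponent, and doubly blinded values match exactly when the underlying entries match, so the overlap is revealed without the entries themselves. The prime, the key values, and the `encode`/`blind`/`common_entries` helpers below are illustrative assumptions, not the authors' protocol.

```python
import hashlib

# Toy group: the largest 64-bit prime. Real deployments need >= 2048-bit groups.
P = 0xFFFFFFFFFFFFFFC5

def encode(entry: str) -> int:
    """Hash a policy entry into the multiplicative group mod P."""
    h = int.from_bytes(hashlib.sha256(entry.encode()).digest(), "big")
    return h % (P - 2) + 2  # avoid 0 and 1

def blind(value: int, secret: int) -> int:
    """Commutative 'encryption': modular exponentiation by a private key.
    (x^a)^b == (x^b)^a (mod P), so the order of blinding does not matter."""
    return pow(value, secret, P)

def common_entries(entries_a, entries_b, a_secret, b_secret):
    """Count matching policy entries without revealing non-matching ones:
    each side blinds its own list, exchanges it, and blinds the other's."""
    double_a = {blind(blind(encode(e), a_secret), b_secret) for e in entries_a}
    double_b = {blind(blind(encode(e), b_secret), a_secret) for e in entries_b}
    return len(double_a & double_b)
```

In a real run each AS would see only blinded values, never the other side's plaintext policy entries.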
IPSBSAR: An Incremental P2P Search Algorithm Based on Social Acquaintance Relationship
Zhu Guiming, Jin Shiyao, and Guo Deke
2009, 46(8):  1260-1269. 
Nowadays it is quite easy for ordinary users to share and exchange resources on the Internet through peer-to-peer applications such as Gnutella, and more and more people therefore join peer-to-peer networks to share and exchange resources. As a result, the number of peers has become extremely large, and resources are extremely abundant and scattered. In this setting, performing an exhaustive search to retrieve all resources related to a query is a challenging job. To solve this problem, the authors present an incremental P2P search algorithm based on social acquaintance relationships (IPSBSAR). IPSBSAR mimics the behavior of peers in social networks to establish different semantic links among peers according to how well they know each other, and introduces a novel access and update mode for the neighbor list to support incremental search. While supporting incremental search, IPSBSAR can also avoid copyright problems. Experimental results show that IPSBSAR achieves a high incremental query hit rate with low cost and low latency, and efficiently retrieves most relevant resources when performing exhaustive incremental search with the same query semantics.
Delay/Disruption-Tolerant Network Study
Li Xiangqun, Liu Lixiang, Hu Xiaohui, and Zeng Kaixiang
2009, 46(8):  1270-1277. 
Although often not explicitly stated, the existing Internet model is based on a number of assumptions, such as: an end-to-end path exists between the source and its destination at all times, the maximum delay is not excessive, and the packet drop probability is small. However, a class of challenged networks, called DTNs (delay/disruption-tolerant networks), emerges in many circumstances. A DTN differs greatly from the traditional network model: it has long delays, frequent disruptions, often no contemporaneous end-to-end path, and nodes with limited storage and processing capability. As a result, present network protocols such as the Transmission Control Protocol (TCP) cannot satisfy applications over DTNs. Some recent advances in DTN research are reviewed in this paper. Firstly, the main organizations working on DTN topics and the typical characteristics of DTNs are introduced. Then several classical overlay protocols of DTN, such as the bundle protocol, are presented, and routing algorithms, including knowledge-based single-copy routing algorithms and multi-copy routing algorithms, are compared. Moreover, DTN research in deep-space communication, wildlife tracking, water quality surveying, and connecting remote villages to the Internet is discussed as real examples of DTNs. Finally, open issues and challenges for DTNs in the near future are proposed.
Coarse-Grained Dataflow Network Processor: Architecture and Prototype Design
Li Tao, Zhang Xiaoming, and Sun Zhigang
2009, 46(8):  1278-1284. 
Network processors (NPs) are programmable, highly integrated communication circuits optimized to support packet processing and forwarding at high rates. As an important component of routers, the design of network processors must address the requirement of high performance while maintaining the flexibility to accommodate future network protocols. To address the limited ILP exploitation and fixed topology of control-flow NPs, a novel coarse-grained dataflow NP architecture and prototype, DynaNP, is proposed. DynaNP not only preserves the programmability of the entire NP by retaining a control-flow structure within each processing element (PE), but also effectively exploits task-level parallelism by introducing a dataflow model into the processing among the PEs. A mechanism for dynamically configurable processing paths is also provided in DynaNP to improve the overall throughput of the system. Moreover, the prototype system of DynaNP is introduced in this paper. The prototype is designed based on SoPC (system on programmable chip): multiple PEs and several functional modules are connected by an on-chip communication network, the PEs are implemented with the embedded RISC processor core LEON3, and the instruction set of the LEON3 is extended to accelerate the processing of network protocols. The basic functions and key techniques of DynaNP can be investigated through the prototype system.
A QoS-Oriented Approach for Web Service Group Testing
Deng Xiaopeng, Xing Chunxiao, Zhang Yong, and Cai Lianhong
2009, 46(8):  1285-1293. 
The emerging technologies of Web services and SOA have supplied a new paradigm for Web application architecture. Because there will be many Web services with the same or similar functions in the future, the selection of Web services under group testing has become an ongoing research topic. Building on related work on Web service selection under group testing, the authors propose a QoS-oriented approach to overcome the limitation of existing methods, which select Web services under group testing based only on function optimization. By defining the QoS vector, its characteristics and maximum similar degree, the QoS testing oracle, QoS characteristic testing oracles, and so on, QoS vectors are clustered using hierarchical clustering. The approach adjusts the clustering hierarchy dynamically according to the maximum similar degree. After clustering is finished, Web services are selected under group testing by means of the QoS testing oracle, the QoS characteristic testing oracles, and a decision tree. Experimental results show that the approach is effective and overcomes the limitation of existing methods that are based only on function-optimized selection.
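The clustering step described above can be sketched as agglomerative clustering over QoS vectors that stops merging once the closest pair of clusters exceeds a similarity threshold. This is a minimal single-linkage sketch; the threshold plays the role of the paper's "maximum similar degree", whose exact definition is not reproduced here.

```python
def euclidean(u, v):
    """Distance between two QoS vectors (e.g. availability, latency score)."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def cluster_qos(vectors, max_distance):
    """Agglomerative single-linkage clustering: repeatedly merge the two
    closest clusters until the nearest pair is farther apart than
    max_distance (a stand-in for the maximum-similar-degree cutoff)."""
    clusters = [[v] for v in vectors]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(euclidean(u, v)
                        for u in clusters[i] for v in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        if d > max_distance:
            break  # remaining clusters are already dissimilar enough
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```

Services whose QoS vectors land in the same cluster would then be compared by the testing oracles rather than re-tested individually.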
Extension of Model Checking Tool of Colored Petri Nets and Its Applications in Web Service Composition
Men Peng and Duan Zhenhua
2009, 46(8):  1294-1303. 
Formal description and verification of Web service composition are important research issues. To better support verification, an extended model checking method based on colored Petri nets is proposed. The latest version of the colored Petri net model checking tool can only determine the correctness of temporal logic formulas; no further diagnostic information is available. Based on the local model checking algorithm for CTL (computation tree logic) on colored Petri nets, an algorithm for searching for a witness or counterexample of the model checking result is investigated. The algorithm is implemented in the ML (meta language) language and seamlessly integrated into the colored Petri net model checking tool CPN Tools. The extended CPN model checking tool can then be used in the verification of Web service composition. With this approach, CPN is used to describe the behavior of the Web service composition, and CTL formulas are used to specify the requirements of the target Web services. The method can not only check for logical errors in Web service composition, but also tell users the reasons for the errors. This extension provides technical guarantees for Web service composition validation based on colored Petri nets. Experiments demonstrate that the extension is correct and effective.
An Adaptive Alert Correlation Method Based on Pattern Mining and Clustering Analysis
Tian Zhihong, Zhang Yongzheng, Zhang Weizhe, Li Yang, and Ye Jianwei
2009, 46(8):  1304-1315. 
Multi-step attacks are one of the primary forms of current attacks. There are relationships among the steps of an attack, such as redundancy and causality. But these relationships among security events are often ignored by current intrusion detection systems (IDSs), and an important problem in the IDS field is the large volume of false positives, which tends to overwhelm human operators. On the basis of analyzing the evolution and drawbacks of current alert correlation systems, a self-adaptive alert correlation method, A3PC, is presented, based on anomaly detection ideas and centered on the concept of behavior patterns generated by alerts. An alert classification model is created by extracting association rules and sequential patterns in order to automatically discriminate false alerts. At the same time, an effective and condensed alert view for administrators is produced by combining pattern mining with clustering analysis and a semi-automatic interactive processing approach. The accuracy of intrusion detection systems is thus enhanced. The DARPA intrusion scenario dataset from the MIT Lincoln Laboratory is used to evaluate the functionality and performance of A3PC. The experimental results indicate that A3PC is superior to traditional methods in accuracy, real-time performance, and adaptivity.
Research on Neural Cryptographic Protocols
Chen Tieming, Samuel H. Huang, Liu Duo, and Cai Jiamei
2009, 46(8):  1316-1324. 
Research on cryptographic protocols based on discretized neural networks is a novel topic in the information security domain. At the beginning of this paper, the literature on such neural cryptography is surveyed, and an interacting neural network model, the tree parity machine (TPM), is introduced together with a survey of its theoretical research. Then, several important issues regarding the TPM's application to cryptography are addressed and analyzed through an empirical engineering study. Two key problems, the stability of weight synchronization and the security of synchronization, are investigated in detail. Two new methods, a threshold in the neuron activation function and a hash-based synchronization check, are proposed to improve the TPM-based cryptography scheme. ANOVA and statistical experimental results show that using the threshold achieves faster and more stable weight synchronization, while using a hash function checks weight synchronization precisely. Finally, an attack analysis of the improved TPM-based protocol models is discussed.
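The TPM model surveyed above is simple enough to sketch: K hidden units with N inputs each, integer weights bounded by L, output the product of the hidden signs, and a Hebbian update applied only when both parties' outputs agree. The parameters and the digest helper below are illustrative (the hash-based check is the paper's idea; this exact digest format is an assumption).

```python
import hashlib
import random

class TreeParityMachine:
    """Tree parity machine with K hidden units, N inputs per unit,
    and integer weights bounded by L (illustrative parameters)."""
    def __init__(self, k=3, n=4, l=3, rng=None):
        self.k, self.n, self.l = k, n, l
        rng = rng or random.Random()
        self.w = [[rng.randint(-l, l) for _ in range(n)] for _ in range(k)]

    def output(self, x):
        """x is a K x N array of +/-1 inputs; stores sigma/tau for the update."""
        self.sigma = [1 if sum(wi * xi for wi, xi in zip(row, xs)) > 0 else -1
                      for row, xs in zip(self.w, x)]
        self.tau = 1
        for s in self.sigma:
            self.tau *= s
        return self.tau

    def hebbian_update(self, x, partner_tau):
        """Move weights only when both machines agreed on the output;
        call output(x) first so sigma/tau reflect this input."""
        if self.tau != partner_tau:
            return
        for i in range(self.k):
            if self.sigma[i] == self.tau:
                for j in range(self.n):
                    w = self.w[i][j] + self.sigma[i] * x[i][j]
                    self.w[i][j] = max(-self.l, min(self.l, w))

def weight_digest(tpm):
    """Hash-based synchronization check: parties exchange digests of
    their weights instead of the raw weights themselves."""
    return hashlib.sha256(repr(tpm.w).encode()).hexdigest()
```

Once the digests match, both parties hold identical weight matrices, which can serve as the shared key material.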
An Approach to Data Sealing Based on Trusted Virtualization Platform
Wang Dan, Feng Dengguo, and Xu Zhen
2009, 46(8):  1325-1333. 
In a trusted computing platform, one of the most important features is the sealing functionality, which provides strong data security by combining encrypted data storage with the platform configuration. Data is sealed to the platform configuration, and the sealed data can only be unsealed and used normally when the platform configuration at unsealing is the same as it was at sealing. However, the platform configuration changes frequently with hardware exchanges, software updates, and system patches, which heavily restricts the use of the sealing functionality. Aiming at this limitation, current solutions have been improved to support configuration updates based on hardware or software, but they only consider the use of sealed data on two platforms with different configurations and the same property, and some have no implementation at all. Furthermore, in these solutions the trusted platform module (TPM) carries a heavy burden and efficiency is very poor. To solve this problem, an approach to sealed data storage based on a trusted virtualization platform is presented, which introduces the concepts of the virtual PCR (vPCR) and the security property, and uses the TPM to seal data with the security property of the system. Virtual machines' configurations are stored in vPCRs, and their corresponding security properties are dynamically stored into the PCR in turn before sealing or unsealing starts. The security properties are classified by security level, and the sealing and unsealing operations follow the rule that sealed data can be successfully unsealed only if the security level of the security property at unsealing is not less than the security level at sealing. The approach adapts to frequent platform configuration changes, protects data security in many virtual machines without being affected by configuration changes, and is simple to operate. Experiments show that the burden on the TPM is light and there is no evident decrease in efficiency compared with former approaches.
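The level-comparison rule at the heart of the scheme can be shown in a toy model: data is bound to a security level rather than to raw PCR values, so configuration changes that preserve (or raise) the level do not break unsealing. The class and method names are placeholders, and no real TPM interface or encryption is modeled here.

```python
class PropertySealer:
    """Toy model of property-based sealing. A real implementation would
    encrypt the data under a TPM-protected key; here we only model the
    policy: unseal succeeds iff current level >= sealing level."""
    def __init__(self):
        self._store = {}
        self._next_handle = 0

    def seal(self, data: bytes, level: int) -> int:
        """Bind data to the platform's current security level."""
        handle = self._next_handle
        self._next_handle += 1
        self._store[handle] = (data, level)
        return handle

    def unseal(self, handle: int, current_level: int) -> bytes:
        """Release data only if the platform's level has not dropped."""
        data, sealed_level = self._store[handle]
        if current_level < sealed_level:
            raise PermissionError("platform security level below sealing level")
        return data
```

A software update that keeps the VM at the same security level therefore leaves sealed data usable, which is exactly the flexibility the configuration-bound approach lacks.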
Detecting Distributed Denial of Service Attack Based on Address Correlation Value
Cheng Jieren, Yin Jianping, Liu Yun, Cai Zhiping, and Li Min
2009, 46(8):  1334-1340. 
Detecting distributed denial of service (DDoS) attacks is currently a hot topic in the network security field. The characteristics of DDoS attacks and existing methods for detecting them are analyzed, and a novel detection scheme based on the address correlation value (ACV) is proposed. The ACV is designed to reflect the essential features of DDoS attacks, such as abrupt traffic changes, flow asymmetry, distributed source IP addresses, and concentrated target IP addresses. To increase detection accuracy under various conditions, ACV time series are transformed into a multidimensional vector (MV) by estimating the parameters of an autoregressive (AR) model with the Yule-Walker method, and the MV is then used to describe the state features of network flows. Furthermore, a support vector machine (SVM) classifier, trained on MVs of ACV time series from normal and attack flows, is applied to classify the state of the current network flow and identify DDoS attacks. The experimental results show that ACV time series characterize well the different state features of DDoS attack flows and normal flows; the scheme can identify the state features of abnormal flows caused by DDoS attack traffic, detect DDoS attacks accurately, and reduce false positives drastically.
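The pipeline above (per-window address statistic, then AR parameters via Yule-Walker) can be sketched as follows. The `acv` proxy here, distinct sources over distinct destinations per window, is an assumption for illustration; the paper's exact ACV formula is not reproduced. The Yule-Walker step is shown for AR(1), where the equation reduces to phi = r(1)/r(0).

```python
def acv(window):
    """Illustrative address-correlation proxy for one time window of
    (src, dst) pairs: many distinct sources hitting few destinations,
    typical of DDoS, drives the value up. Not the paper's exact ACV."""
    srcs = {s for s, d in window}
    dsts = {d for s, d in window}
    return len(srcs) / max(1, len(dsts))

def yule_walker_ar1(series):
    """AR(1) coefficient from the Yule-Walker equation phi = r(1)/r(0),
    using sample autocovariances of the mean-centered series."""
    n = len(series)
    mean = sum(series) / n
    x = [v - mean for v in series]
    r0 = sum(v * v for v in x) / n
    r1 = sum(x[i] * x[i + 1] for i in range(n - 1)) / n
    return r1 / r0 if r0 else 0.0
```

A feature vector of such AR coefficients (one per lag, for higher-order models) is what the SVM would then classify.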
A Fast and Exact Single Pattern Matching Algorithm
Fan Hongbo, and Yao Nianmin
2009, 46(8):  1341-1348. 
String matching is a fundamental problem in computer science. It is a key problem in many important areas such as network security, information retrieval and filtering, and computational biology, and the design of high-performance exact single-pattern matching algorithms is the basis of all string matching problems. In this paper, the authors improve one of the fastest known exact single-pattern matching algorithms on English text, SBNDM2. The simplest form of the BNDM core loop is obtained, with only 5 instructions per character read, by amending the relationship between positions in the pattern and bits in the bitmask. A cross-border protection method is also added to the algorithm to reduce the cost of boundary checking. Two algorithms, S2BNDM and S2BNDM′, are presented. The experimental results indicate that both S2BNDM and S2BNDM′ are faster than SBNDM2 in every case, and S2BNDM can be considered the fastest algorithm on English text.
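For readers unfamiliar with the BNDM family, the core bit-parallel loop that SBNDM2 refines looks like this: each text window is scanned right to left while a bitmask simulates the nondeterministic suffix automaton of the reversed pattern. This is plain BNDM for illustration, not the authors' S2BNDM variants.

```python
def bndm(pattern, text):
    """Bit-parallel BNDM: returns all positions where pattern occurs.
    B[c] marks, for each character, the pattern positions holding c;
    D tracks which pattern factors are still alive in the window."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    mask = (1 << m) - 1
    B = {}
    for i, c in enumerate(pattern):
        B[c] = B.get(c, 0) | (1 << (m - 1 - i))
    occurrences = []
    pos = 0
    while pos <= n - m:
        j, last, D = m, m, mask
        while D:
            D &= B.get(text[pos + j - 1], 0)  # read window right to left
            j -= 1
            if D & (1 << (m - 1)):            # a pattern prefix matched
                if j > 0:
                    last = j                  # remember shift candidate
                else:
                    occurrences.append(pos)   # whole pattern matched
            D = (D << 1) & mask
        pos += last                           # shift past dead factors
    return occurrences
```

SBNDM2's contribution is reading two characters before entering this loop and simplifying the loop body; the 5-instructions-per-character claim refers to that inner `while D` loop.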
Image Copy Detection with Rotation and Scaling Tolerance
Zou Fuhao, Li Xiaowei, Xu Zhihua, Ling Hefei, and Li Ping
2009, 46(8):  1349-1356. 
Currently, most image copy detection methods can successfully resist noise-like distortions, but they are quite fragile to geometric distortions such as rotation, shift, translation, scaling, and cropping. Among geometric distortions, rotation and scaling are the most common. To resist rotation, shift, scale, and crop distortions, Wu et al. proposed ellipse-track-division-based image copy detection methods in 2005 and 2007, respectively. However, because the ellipse shape lacks rotation invariance, Wu's method did not really address the rotation distortion problem. To fully overcome rotation and scale distortions, the authors propose a novel image copy detection scheme that combines a cirque (annular) division strategy with the ordinal measure method to extract a compact image feature. Since the content within an annular track region is almost invariant before and after rotation and scale distortion, the proposed method successfully resists these two kinds of distortion. In addition, since the ordinal-measure-based feature vector is insensitive to slight local or global changes in image content, noise-like distortions are also handled effectively. The experimental results show that the proposed method outperforms Wu's method in resisting rotation and scale distortions.
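The cirque-plus-ordinal-measure idea can be sketched directly: partition the image into concentric rings around the center, take the mean intensity of each ring, and keep only the rank order of those means. Ring membership depends only on distance from the center, so the ranks survive rotation; ring counts and the grid size below are illustrative choices, not the paper's parameters.

```python
def ring_ordinal_feature(img, rings=4):
    """Ordinal-measure feature over concentric rings ('cirque' division):
    mean intensity per ring, reduced to its rank order. img is a square
    2D list of intensities."""
    n = len(img)
    c = (n - 1) / 2.0
    max_r = (2 * c * c) ** 0.5 + 1e-9          # distance to a corner
    sums, counts = [0.0] * rings, [0] * rings
    for i in range(n):
        for j in range(n):
            r = ((i - c) ** 2 + (j - c) ** 2) ** 0.5
            band = min(rings - 1, int(r / max_r * rings))
            sums[band] += img[i][j]
            counts[band] += 1
    means = [s / cnt if cnt else 0.0 for s, cnt in zip(sums, counts)]
    order = sorted(range(rings), key=lambda b: means[b])
    ranks = [0] * rings
    for rank, band in enumerate(order):
        ranks[band] = rank                      # ordinal measure
    return ranks

def rot90(img):
    """Rotate a square image by 90 degrees (distance-preserving)."""
    n = len(img)
    return [[img[n - 1 - j][i] for j in range(n)] for i in range(n)]
```

Comparing two images then reduces to comparing two short rank vectors, which is what makes the feature compact.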
Coalition Structure Generation with Worst-Case Finite Bound from the Optimal Guarantees
Hu Shanli, Li Shaofang, and Shi Chunyi
2009, 46(8):  1357-1363. 
Coalition formation is a key topic in multi-agent systems. However, finding the optimal coalition structure is NP-complete. Sandholm et al. showed that searching the lowest two levels of the coalition structure graph is necessary and sufficient to establish a worst-case bound k. How to continue the search after the lowest two levels of the coalition structure graph remained an unresolved problem for a long time. Dang et al. presented an algorithm in which, for odd bounds k≥3, only those coalition structures whose biggest coalition's cardinality is at least n(k-1)/(k+1) are further searched; this was the first known algorithm whose search unit is not a level, and it is clearly faster than Sandholm et al.'s algorithm for smaller bounds. The authors analyze the relations among coalition structures in depth and present an algorithm that needs to search only those coalition structures whose biggest coalition's cardinality is exactly n(k-1)/(k+1), greatly decreasing the number of coalition structures to be searched. Moreover, the algorithm is improved to search instead those levels whose coalition structures are fewer than the corresponding sets of structures whose biggest coalition's cardinality equals n(k-1)/(k+1), further decreasing the number of coalition structures to be searched and substantially improving on the work of Sandholm et al. and Dang et al.
Prediction and Abnormal Behavior Detection of Agent Dynamic Interaction Trust
Tong Xiangrong, Huang Houkuan, and Zhang Wei
2009, 46(8):  1364-1370. 
Computation of trust is an interesting research direction in agent and multi-agent systems theory. There has been much research on agent trust and reputation in the past few years, such as the TRAVOS model presented by Teacy and the FIRE model presented by Huynh. However, previous work is based only on the average probability of historical interactions and pays little attention to the dynamic variation of agent trust, so precise prediction and abnormal behavior detection of trust are not achieved. To address these problems, a computational model of agent interaction trust (CMAIT), in which the interaction history is divided by time, is proposed using probability theory. Furthermore, based on the derivative of trust, the computational confidence of the computational information and the computational confidence of its deviation are given, together with CMAIT's mechanism for detecting abnormal behavior. It is proven that trust based on average historical interaction is a special case of CMAIT. Experiments are conducted on Web e-commerce data from the Taobao website. The experimental results demonstrate that the computational error of CMAIT is half that of TRAVOS, and its computational complexity is also low. It can be applied to the detection of abnormal behavior and the prediction of future behavior, and improves on the work of Jennings on agent trust.
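The time-divided idea can be illustrated in a few lines: instead of one lifetime average, compute trust per time window and flag windows where it drops sharply (a discrete stand-in for the derivative-based check). The window size and drop threshold below are illustrative assumptions, not CMAIT's actual parameters.

```python
def windowed_trust(outcomes, window):
    """Per-window trust: fraction of successful interactions (1 = success,
    0 = failure) in each consecutive time window. A lifetime average
    would blur exactly the dynamics this preserves."""
    chunks = [outcomes[i:i + window] for i in range(0, len(outcomes), window)]
    return [sum(c) / len(c) for c in chunks if c]

def abnormal(trust_series, max_drop=0.3):
    """Flag windows whose trust falls sharply versus the previous window,
    a derivative-style anomaly check with an illustrative threshold."""
    return [i for i in range(1, len(trust_series))
            if trust_series[i - 1] - trust_series[i] > max_drop]
```

An agent that behaves well for a long time and then turns malicious keeps a high lifetime average but is caught immediately by the windowed series.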
An Automated Test Data Generation Algorithm Based on Selective Redundancy
Li Junyi, Li Renfa, and Sun Jiaguang
2009, 46(8):  1371-1377. 
Automated test data generation has become a hot topic in software testing research, and many useful models and methods have been proposed, but the performance of existing schemes is not very satisfactory. It is therefore important to study how to design new high-performance automated methods for test data generation. Based on selective redundancy, a new automated test data generation algorithm is proposed. It first adopts methods such as linear approximation and minimization of branch functions to find feasible paths and automatically generate an original test data suite for the partly feasible paths; when a test data suite cannot be obtained this way, it then adds test data, based on selective redundancy, for predicates and sub-path pairs not yet covered by the original suite. The new algorithm, combined with predicate slicing and the DUC expression of functions, can determine from the source point whether a sub-path is feasible, which effectively decreases the adverse influence of infeasible paths on performance. Algorithm analysis and experimental results show that the new algorithm reduces the size of the test data suite effectively and improves test performance.
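The branch-function-minimization step mentioned above can be sketched concretely: each predicate is turned into a distance function that is zero when the branch is taken, and a search drives an input toward zero. The predicate `x > 10` and the simple local search are illustrative, not the paper's algorithm.

```python
def branch_distance(x):
    """Branch function for the (hypothetical) predicate x > 10:
    zero when the branch would be taken, positive otherwise, with the
    usual +1 offset so near-misses are still penalized."""
    return 0.0 if x > 10 else (10 - x) + 1.0

def minimize(f, x0, step=1.0, max_iter=1000):
    """Local-search minimization of a branch function: move toward lower
    distance, halving the step when stuck, until the branch is taken."""
    x = x0
    for _ in range(max_iter):
        if f(x) == 0.0:
            return x            # found an input covering the branch
        if f(x + step) < f(x):
            x += step
        elif f(x - step) < f(x):
            x -= step
        else:
            step /= 2.0
    return x
```

The selective-redundancy step would then run only for the predicates this search fails to cover, which is what keeps the generated suite small.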
Operating and Analyzing the Reproducibility of Empty Marking Nets
Ye Jianhong, Song Wen, and Sun Shixin
2009, 46(8):  1378-1385. 
A necessary and sufficient condition for reproducing the empty marking is the existence of a non-negative T-invariant whose net representation has neither siphons nor traps and which contains a positive entry for at least one fact and one goal transition. This result is extended, and it is proved that the operations of composition, insertion, deletion, and substitution do not affect the reproducibility of the empty marking. Moreover, some properties related to empty marking nets are discussed: for example, a net in which the empty marking is reproducible preserves this reproducibility in its inverse net, and in an acyclic P/T net with a positive entry for at least one fact and one goal transition, the empty marking is reproducible if and only if the net is covered by T-invariants. In particular, if a Horn net satisfies all the above conditions and is acyclic, then the T-invariant is realizable. These theoretical results reveal interesting connections to other notions, such as the forward and backward liveness of the empty marking, and they also have application-oriented aspects; examples can be found in proving complex logic inference and in checking the soundness of workflow logic nets. Finally, an algorithm for these checks is proposed.
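The T-invariant condition at the center of the abstract is easy to state in code: a non-negative vector x is a T-invariant of a P/T net when C·x = 0, where C is the incidence matrix. This checks only the invariant equation; the siphon/trap and fact/goal conditions above are the extra requirements that make the empty marking actually reproducible.

```python
def is_t_invariant(incidence, x):
    """Check that a non-negative integer vector x is a T-invariant:
    C @ x == 0, where incidence[p][t] is the net token change on
    place p when transition t fires."""
    if any(v < 0 for v in x):
        return False
    return all(sum(row[t] * x[t] for t in range(len(x))) == 0
               for row in incidence)
```

For the two-place cycle p0 -t0-> p1 -t1-> p0, firing each transition once returns every place to its initial token count, so [1, 1] (and any multiple) is a T-invariant.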
Transactional Memory System
Peng Lin, Xie Lunguo, and Zhang Xiaoqiang
2009, 46(8):  1386-1398. 
Exploiting the performance of multi-core processors depends on parallelism in programs. The shared memory model is widely adopted by multi-core processors, and coordinating access to shared variables among threads is both key and very difficult. Lock mechanisms are widely used for synchronization between threads; however, conventional locking techniques in parallel systems have some common problems: priority inversion, convoying, and deadlock. Transactional memory (TM), which adopts the transaction concept from database systems, is proposed to make synchronization easy to realize and efficient. Transactions in transactional memory, with their atomicity, consistency, and isolation properties, provide a foundation for ensuring that concurrent reads and writes of shared data execute atomically and do not produce inconsistent or incorrect results. Transactional memory now attracts considerable research interest, but the research is still in progress and has not yet been systematized. The authors introduce the origin of transactional memory and attempt to give a definition of a transactional memory system. The transactional memory programming interface and its execution model are shown. TM's design space and realization strategies are discussed, and the different techniques used in these aspects are compared. Transactional memory is not a parallel programming panacea, and several open issues are then discussed. Finally, a few open-source research platforms are introduced.
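The execution model the survey describes can be made concrete with a minimal optimistic STM sketch: transactions buffer reads and writes, then validate at commit that nothing they read has changed, and the runtime retries on conflict. This is a single-threaded illustration of the protocol only; a real TM would also serialize commits (e.g. with a global lock or CAS) and handle contention management.

```python
class Ref:
    """A transactional cell: a value plus a version number."""
    def __init__(self, value):
        self.value = value
        self.version = 0

class Conflict(Exception):
    pass

class Transaction:
    """Optimistic transaction: reads record the version seen, writes are
    buffered, and commit validates the read set before publishing."""
    def __init__(self):
        self.reads = {}   # ref -> version observed at first read
        self.writes = {}  # ref -> buffered new value

    def read(self, ref):
        if ref in self.writes:
            return self.writes[ref]       # read-your-own-writes
        self.reads.setdefault(ref, ref.version)
        return ref.value

    def write(self, ref, value):
        self.writes[ref] = value

    def commit(self):
        for ref, seen in self.reads.items():
            if ref.version != seen:
                raise Conflict()          # someone committed under us
        for ref, value in self.writes.items():
            ref.value = value
            ref.version += 1

def atomically(fn, retries=10):
    """Run fn(tx) and retry on conflict, as a TM runtime would."""
    for _ in range(retries):
        tx = Transaction()
        try:
            result = fn(tx)
            tx.commit()
            return result
        except Conflict:
            continue
    raise Conflict()
```

Note how the lost-update problem that plagues unsynchronized counters becomes a detected conflict followed by a transparent retry.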
A Novel Method Containing the Radial Error Propagation in Self-Servowriting in Hard Disk
Zhao Xiaogang, Wang Haiwei, and Xie Changsheng
2009, 46(8):  1399-1407. 
Servo information is used to determine the relative position of the magnetic head and the data track, so it must be written during the manufacture of a hard disk drive (HDD). The traditional servo writing process, however, requires a servo track writer and a clean room, which add considerable cost to HDD manufacturing, and the rapidly increasing track density puts further pressure on the servo writing process. Self-servowriting uses the HDD's own equipment to write the servo information, improving the productive efficiency of HDD manufacturing. The self-servowriting process and its critical problem are discussed. The principle of self-servowriting and the phenomenon of radial error propagation are analyzed: radial error propagation is caused by noise disturbance, "seed" track defects, and deep reliance on the shape of the previous track. Two different methods that can contain the error propagation are then compared and their shortcomings discussed. Next, a new weighted correction signal method is described in detail, and its effectiveness and advantages are proved in theory and in simulation tests. The simulation shows that it can decrease the absolute and relative radial track errors simultaneously.
Replica Placement and Update Mechanism for Individual QoS-Restricted Requirement in Data Grids
Fu Wei, Xiao Nong, and Lu Xicheng
2009, 46(8):  1408-1415. 
As an important strategy for reducing access time, balancing load, and obtaining high data availability, replication is widely used in distributed systems such as data grids. Grid researchers currently tend to consider the replica management problem from the viewpoint of the whole system, ignoring the individual QoS requirements of each single request. With the rapid growth of QoS-sensitive data grid applications, fulfilling the QoS requirements of every request becomes a big challenge. In this paper, a new problem called the individual QoS-restricted replica placement problem is defined and proved to be NP-hard. As traditional methods cannot solve this problem, a QoS-restricted data grid model named IQDG is built; based on IQDG, a novel replica placement algorithm called qGREP is presented. The algorithm uses profit value as heuristic information, considering both the fulfillment of individual QoS restrictions and the diversification of replica cost. To guarantee replica consistency, a logical ring-based update mechanism is also proposed. IQDG takes the individual QoS requirements of each request into account and obtains a rational replica policy by trading off consistency cost against access efficiency. Analysis reveals the internal relationships among network topology, user access patterns, and load. Experiments show that the model and algorithm are feasible and effective in complex data grid environments and achieve good performance under different simulation conditions.
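The profit-driven placement idea can be sketched as a greedy loop: repeatedly place a replica at the candidate node that satisfies the most as-yet-unmet QoS-restricted requests, net of a per-replica cost. This is a generic greedy heuristic under stated assumptions, not the qGREP algorithm itself; the precomputed "QoS scope" mapping (each request to the nodes close enough to meet its bound) is also an assumption.

```python
def greedy_placement(requests, candidates, storage_cost, max_replicas):
    """Greedy profit-driven replica placement sketch.
    requests: request id -> set of nodes that would satisfy its QoS bound.
    Each round adds the candidate with the highest positive profit
    (newly satisfied requests minus the per-replica storage cost)."""
    placed = []
    unmet = set(requests)
    while len(placed) < max_replicas:
        best, best_profit = None, 0
        for node in candidates:
            if node in placed:
                continue
            gain = sum(1 for r in unmet if node in requests[r])
            profit = gain - storage_cost
            if profit > best_profit:
                best, best_profit = node, profit
        if best is None:
            break  # no remaining replica pays for itself
        placed.append(best)
        unmet = {r for r in unmet if best not in requests[r]}
    return placed
```

A real algorithm would fold consistency-update cost into the profit term, which is where the trade-off mentioned in the abstract enters.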