ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 December 2010, Volume 47 Issue 12
Paper
Binary Representation of Similarity
Yu Jian
2010, 47(12):  . 
Abstract ( 220 )   PDF (700KB) ( 474 )
Similarity plays an important role in pattern recognition and machine learning, and it is well known that it admits different definitions. This paper discusses how similarity is interpreted under exemplar theory and prototype theory: in exemplar theory, similarity is defined pairwise between two objects, whereas in prototype theory it is defined between an object and a prototype. Based on this analysis, it is pointed out that almost any nonnegative physical measure admits an interpretation as a similarity, and that a similarity measure reflects certain global properties. For instance, similarity offers a new interpretation of images and fuzzy sets. More importantly, a new interpretation of Wertheimer's contrast-invariance principle is introduced from the similarity point of view. When similarity is used, a binary decision (yes or no) is often made, so a binary representation of similarity is of interest. A mathematical model relating similarity to a binary variable is established using a Taylor expansion. Based on this result, a binary representation of nonnegative bounded matrices is presented, which leads to an optimal binary decomposition of the similarity matrix. Since many applications rely on similarity matrices, the model is potentially useful.
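As a rough illustration of representing a nonnegative bounded similarity matrix with binary matrices (a plain dyadic expansion, not the paper's optimal decomposition; the function name and number of terms are illustrative assumptions):

import numpy as np

def dyadic_binary_decomposition(S, n_terms=8):
    """Approximate a similarity matrix S with entries in [0, 1] as a weighted
    sum of binary matrices: S ~ sum_k 2^{-k} * B_k. A bit-level expansion only,
    not the optimal decomposition derived in the paper."""
    S = np.clip(np.asarray(S, dtype=float), 0.0, 1.0)
    residual = S.copy()
    terms = []
    for k in range(1, n_terms + 1):
        weight = 2.0 ** (-k)
        B = (residual >= weight).astype(int)   # binary matrix at this level
        terms.append((weight, B))
        residual = residual - weight * B
    return terms

S = np.array([[1.0, 0.7], [0.7, 1.0]])
approx = sum(w * B for w, B in dyadic_binary_decomposition(S))
print(np.abs(S - approx).max())               # reconstruction error below 2**-8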
A Simulation Approach Based on the Notion of Emergence for Analyzing MAS Trust Model
Ren Chuanjun, Huang Hongbing, and Jin Shiyao
2010, 47(12):  . 
Abstract ( 375 )   PDF (3061KB) ( 372 )
Research on MAS (multi-agent system) trust models currently focuses on devising trust mechanisms, but the macro properties implied by these micro mechanisms are not obvious. Consequently, most studies rely on modeling and simulation to analyze the macro properties generated by trust mechanisms. However, many of these simulation experiments lack proper guidance from simulation theory and sometimes cannot accurately reflect the properties of interest. This paper introduces the notion of emergence from systems science to investigate MAS and develops a simulation approach based on this notion for analyzing MAS trust models. Starting from the relatively clear behavior at the MAS micro level (i.e., agents' behavior), taking sufficient macro-level constraints into account, and building a runnable simulation system, the approach studies the macro problems and the micro-macro relations of the MAS trust model. Accordingly, at the micro level, the design scheme of trust mechanisms is analyzed and summarized, and on this basis a framework for implementing simulation agents is proposed. At the macro level, several system issues that must be taken into account in simulation-based MAS analysis are discussed, including macro constraints, the threat model, evaluation metrics, and issues in running the simulation. Finally, the method is validated through several simulation experiments.
A Privacy-Preserving Data Publishing Algorithm for Clustering Application
Chong Zhihong, Ni Weiwei, Liu Tengteng, and Zhang Yong
2010, 47(12):  . 
Abstract ( 310 )   PDF (860KB) ( 549 )
Privacy has become an increasingly serious concern in applications involving micro-data, and privacy-preserving data publishing has recently attracted considerable research attention. Most existing methods focus on publishing categorical data, with potential applications mainly in aggregate querying, frequent pattern mining and classification. For the problem of publishing numerical data for clustering analysis, the notions of individual data record and common data record are introduced through density analysis within the neighborhood of a given record; they describe the effect of each record on maintaining clustering usability. Furthermore, a positive neighborhood and a negative neighborhood are defined for each individual data record. Based on these definitions, a data obfuscation method named NeSDO is proposed, which preserves privacy by substituting the original micro-data values with synthetic statistical values computed from a suitable subset of records. For an individual data record, the corresponding items are replaced by the average of the records in its negative neighborhood (or positive neighborhood); for a common data record, the average of the records in its k nearest neighbors is used instead. Theoretical analysis and experimental results indicate that the NeSDO algorithm is effective: it preserves the privacy of sensitive data well while maintaining good clustering usability.
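A minimal sketch of the substitution step described for "common" records, assuming a plain Euclidean k-nearest-neighbor average; the individual-record case and NeSDO's neighborhood definitions are not reproduced here:

import numpy as np

def knn_mean_substitute(data, index, k=5):
    """Replace one record by the mean of its k nearest neighbors (Euclidean)."""
    data = np.asarray(data, dtype=float)
    record = data[index]
    dists = np.linalg.norm(data - record, axis=1)
    dists[index] = np.inf                      # exclude the record itself
    neighbors = np.argsort(dists)[:k]
    return data[neighbors].mean(axis=0)

table = np.random.rand(100, 4)                 # toy numerical micro-data
published = table.copy()
published[0] = knn_mean_substitute(table, 0, k=5)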
Efficient AttributeBased Ring Signature Schemes
Chen Shaozhen, Wang Wenqiang, and Peng Shujuan
2010, 47(12):  . 
Abstract ( 462 )   PDF (874KB) ( 543 )
Two new attribute-based ring signature schemes are proposed in this paper. A signer can sign a message with a subset of its attributes, and all users possessing these attributes form a ring; anyone outside the ring cannot forge a signature on behalf of the ring. The first scheme is existentially unforgeable against selective-attribute attacks in the random oracle model, and the second is existentially unforgeable against selective-attribute attacks in the standard model. Both schemes rest on the intractability of the computational Diffie-Hellman problem. For anonymity, the signer is required to be anonymous among the users holding the same signing attributes, even with respect to the attribute center. In an attribute-based ring signature scheme the signer need not know who else is involved in the ring, so the new schemes are more efficient and flexible than previous identity-based ring signature schemes. Compared with the existing attribute-based ring signature scheme, the signature size decreases by 1/3 and the number of pairing operations also decreases by 1/3, so the new schemes are more efficient in both communication and computation cost. Attribute-based ring signatures are useful in many important applications such as anonymous authentication and attribute-based messaging systems.
Algorithm Analysis and Efficient Parallelization of the Single Particle Reconstruction Software Package: EMAN
Fan Liya, Zhang Fa, Wang Gongming, and Liu Zhiyong
2010, 47(12):  . 
Abstract ( 515 )   PDF (2505KB) ( 411 )
Single particle reconstruction is one of the most important technologies for determining the three-dimensional structures of macromolecules, and it has received increasing attention in recent years because of its distinct features. Unfortunately, its application is greatly constrained by extremely long processing times and the lack of efficient parallel implementations. This study optimizes and parallelizes EMAN, one of the most widely used software packages for single particle reconstruction. By analyzing the algorithms of its major components, the authors find that the key problem is achieving good load balancing with low communication cost. A self-adaptive dynamic scheduling algorithm is introduced to solve this problem; it is applicable not only to EMAN but also to other scheduling problems with independent tasks. Experiments show that, through optimization, the serial execution time of the new implementation is 11.50% lower than that of EMAN. Moreover, thanks to the self-adaptive scheduling algorithm, the new implementation achieves much higher speedups than EMAN: the speedup of the most time-consuming classification component is close to linear, and parallel efficiency on 16 CPU cores is 29.8% higher than that of EMAN. Therefore, the implementation makes full use of available computing resources and dramatically reduces the processing time of single particle reconstruction.
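A generic sketch of dynamic scheduling for independent tasks with shrinking chunk sizes (guided self-scheduling); EMAN's actual self-adaptive policy is not public here, so the chunk-size rule below is an assumption meant only to illustrate the load-balancing idea:

import queue, threading

def guided_schedule(tasks, n_workers, run_task):
    """Carve the remaining tasks into progressively smaller chunks and let
    idle workers pull the next chunk from a shared queue."""
    pending = list(tasks)
    chunks = queue.Queue()
    while pending:
        size = max(1, len(pending) // (2 * n_workers))   # shrinking chunk size
        chunks.put(pending[:size])
        pending = pending[size:]

    def worker():
        while True:
            try:
                chunk = chunks.get_nowait()
            except queue.Empty:
                return
            for t in chunk:
                run_task(t)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads: th.start()
    for th in threads: th.join()

guided_schedule(range(100), 4, lambda t: None)           # toy usage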
A Method to Adjust Minutiae Location and Direction in Nonlinear Distorted Fingerprint Image
Chen Hui, Yin Jianping, and Zhu En
2010, 47(12):  . 
Abstract ( 300 )   PDF (1730KB) ( 439 )
Nonlinear distortion is common in sensor-acquired fingerprint images and severely degrades the performance of fingerprint verification systems. For minutiae-based matching, the most widely adopted approach, distortion changes both the location and the direction of minutiae. Based on careful observation and examination of various distorted fingerprint images, some useful characteristics of minutiae deviation in nonlinearly distorted fingerprint images are identified and then used for minutiae adjustment. This paper proposes a method that adjusts minutiae position and direction on the basis of a multiple-reference-minutiae alignment method, although other effective alignment methods could be used instead. The query fingerprint image and the template fingerprint image are first aligned using the multiple-reference-minutiae alignment method, and some landmark minutiae pairs between the two images are computed. Then the minutiae around each paired landmark minutia are adjusted, and during matching the adjusted query minutiae set is compared with the template minutiae set. Compared with other typical methods, this method is easier to add to existing minutiae-based matching algorithms because it requires no feature other than minutiae. Experimental results on FVC2004 DB2 show that the method clearly improves matching performance.
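A simplified sketch of the adjustment idea, assuming each query minutia (x, y, theta) is corrected by the offset of its nearest paired landmark minutia; the paper's deviation model is richer than this nearest-landmark translation:

import numpy as np

def adjust_minutiae(query, landmark_pairs):
    """query: array of (x, y, theta); landmark_pairs: list of (query_landmark,
    template_landmark) triples. Shift each query minutia by its nearest
    landmark's offset and correct the direction likewise."""
    query = np.asarray(query, dtype=float)
    q_land = np.asarray([p[0] for p in landmark_pairs], dtype=float)
    t_land = np.asarray([p[1] for p in landmark_pairs], dtype=float)
    offsets = t_land - q_land
    adjusted = query.copy()
    for i, m in enumerate(query):
        nearest = np.argmin(np.linalg.norm(q_land[:, :2] - m[:2], axis=1))
        adjusted[i, :2] += offsets[nearest, :2]     # move toward the template frame
        adjusted[i, 2] += offsets[nearest, 2]       # adjust direction by the same pair
    return adjusted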
A Survey of Web Page Cleaning Research
Mao Xianling, He Jing, and Yan Hongfei
2010, 47(12):  . 
Abstract ( 583 )   PDF (2402KB) ( 426 )
The rapid development of the Internet has produced a wide variety of Web applications and Web data, which have become a major data source for much research. A Web page contains many kinds of content, such as advertisements, navigation bars, related links and main text. For different studies and applications, however, not all of this content is necessary; on the contrary, the irrelevant content hurts both the effectiveness and the efficiency of research and applications. Web page cleaning has therefore become a prominent topic in information retrieval as search engines boom, and it is worthwhile to survey the field of page denoising in order to support further in-depth study. This paper first briefly introduces the necessity of Web page cleaning and its related concepts, and presents a classification of Web page cleaning methods into single-model based and multi-model based approaches. It then summarizes the main Web page cleaning techniques and frameworks, including SST, Shingle, Pagelet and DSE. Next, it describes the experimental datasets and experimental methodologies used by these techniques. Finally, it discusses the open problems and future directions in the field of Web page cleaning.
Resource Assignment Algorithm Under Multi-Agent for P2P MMOG
Luo Jia, Chang Huiyou, and Yi Yang
2010, 47(12):  . 
Abstract ( 276 )   PDF (1057KB) ( 494 )
Massively multiplayer online games (MMOGs) based on peer-to-peer overlays are a research hotspot in networked gaming. To establish an effective interest-management mechanism, most approaches organize nodes by using a single collaborator to manage each area of interest; as the number of players and resources in an area grows, that collaborator becomes a performance bottleneck. This paper proposes a structured multi-agent model, SMA, which divides the game world into static areas of interest and introduces a manager to organize the nodes in each area, while the other nodes act as agents responsible for processing events. To enable interactions between nodes and resources in the same area and in neighboring areas, SMA implements a node-joining algorithm and a neighbor-discovery algorithm to establish connections among all nodes. Meanwhile, it calculates cost indices for resources and performance indices for nodes, so that each node holds a set of resources whose total cost index matches its performance index; in this way, resources are assigned evenly to all nodes in an area. Since the state of a resource can be updated by only one node at any time, the consistency of all states is well guaranteed. Experiments show that this model offers clear advantages in many respects.
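A sketch of the even-assignment goal described for SMA, assuming a greedy rule that always gives the next resource (largest cost first) to the least-loaded node relative to its capacity; the paper's index calculation is not reproduced:

import heapq

def assign_resources(resource_costs, node_capacities):
    """resource_costs: {resource: cost}; node_capacities: {node: performance index}.
    Returns {node: [resources]} so that assigned cost roughly matches capacity."""
    heap = [(0.0, cap, node) for node, cap in node_capacities.items()]   # (load ratio, capacity, node)
    heapq.heapify(heap)
    assignment = {node: [] for node in node_capacities}
    for res, cost in sorted(resource_costs.items(), key=lambda kv: -kv[1]):
        ratio, cap, node = heapq.heappop(heap)
        assignment[node].append(res)
        heapq.heappush(heap, (ratio + cost / cap, cap, node))
    return assignment

print(assign_resources({"r1": 5, "r2": 3, "r3": 2}, {"n1": 10, "n2": 5}))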
Overview of Botnet Detection
Wang Hailong, Gong Zhenghu, and Hou Jie
2010, 47(12):  . 
Abstract ( 515 )   PDF (1516KB) ( 617 )
With the rapid development of botnets, the Internet faces growing and potentially disastrous threats: botnets can disable infrastructure and cause financial damage, posing a severe challenge to global network security. Detection is the foundation of any defense against botnets, so botnet detection has recently become a hot topic in network security research. After analyzing the proposed detection techniques, the authors present the basic process of botnet detection and classify the existing techniques. Furthermore, according to the stages of the botnet life cycle, i.e., propagation, infection, communication and attack, they detail the main ideas, detection processes, merits and shortcomings of existing techniques. They then summarize the approaches and the corresponding algorithms used in these detection techniques, propose evaluation indices along six dimensions (source, scope, real-time capability, accuracy, applicability and flexibility), and compare representative techniques against these indices. They further discuss the key issues of botnet detection, including multi-source information collection and fusion, essential feature extraction, detection of communication and behavior, correlation analysis and detection architecture. Finally, future research trends are reviewed.
A Fast Motion Detection Method Based on Improved Codebook Model
Xu Cheng, Tian Zheng, and Li Renfa
2010, 47(12):  . 
Abstract ( 385 )   PDF (1924KB) ( 457 )
Identifying moving objects in a video sequence is a fundamental and critical task in many computer vision applications, which aims to detect regions corresponding to moving objects such as vehicles and people in natural scenes. The existing codebook model (CBM) algorithm is computationally awkward in the RGB color space and cannot attend to perturbation resistance and segmentation capability at the same time, so a fast motion detection method based on an improved codebook model is proposed. Pixels are converted from RGB to YUV space before building the codebook model, which reduces computational complexity. The luminance component of each codeword is then modeled with a Gaussian, so that the codebook model acquires the characteristics of a Gaussian mixture model (GMM); the improved method thus combines the advantages of the GMM while keeping the characteristics of the codebook model. The method is tested on typical video sequences and perturbation detection rate (PDR) curves are drawn. Comparative results show that the improved method segments the background more efficiently than the CBM algorithm in RGB space, and achieves higher perturbation resistance and better adaptability than the traditional CBM and GMM algorithms.
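A simplified sketch of the pixel test implied above: convert RGB to YUV, check the luminance against a per-codeword Gaussian (mean, variance) and the chrominance against a tolerance. The thresholds and the exact matching rule are assumptions, not the paper's formulas:

import numpy as np

def rgb_to_yuv(rgb):
    """BT.601 conversion; rgb is a float triple in [0, 255]."""
    m = np.array([[0.299, 0.587, 0.114],
                  [-0.147, -0.289, 0.436],
                  [0.615, -0.515, -0.100]])
    return np.asarray(rgb, dtype=float) @ m.T

def is_background(pixel_rgb, codewords, k=2.5, uv_tol=10.0):
    """codewords: list of dicts with keys y_mean, y_var, u, v."""
    y, u, v = rgb_to_yuv(pixel_rgb)
    for cw in codewords:
        if abs(y - cw["y_mean"]) <= k * np.sqrt(cw["y_var"]) \
           and abs(u - cw["u"]) <= uv_tol and abs(v - cw["v"]) <= uv_tol:
            return True                     # matches a background codeword
    return False                            # foreground (moving object) candidate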
Construction of Locally Adjustable C2 Parametric Quartic Interpolation Curve
He Ping, Zhang Caiming, and Zhou Jingbo
2010, 47(12):  . 
Abstract ( 272 )   PDF (890KB) ( 386 )
Constructing interpolation curves and surfaces with quartic polynomials as basis functions is an effective approach: the resulting curves and surfaces are simple to construct and easy to compute, and C2 continuity satisfies the requirements of most applications. In shape design, the remaining degrees of freedom can be used to increase the flexibility of design and to control the shape of the curve and surface, making the shape more desirable. The authors discuss the construction of a locally adjustable C2 quartic spline interpolation curve and present a local method for determining the degrees of freedom. The tangent vector at every data point is first identified through a localized quadratic spline function, and these tangent vectors together with the data points roughly determine the shape of the quartic spline curve. The degrees of freedom are then fixed by minimizing the change rate of the spline curve. For any unsatisfactory part of the spline curve, the corresponding tangent vectors are modified as follows: a desirable moving vector is defined such that the curve takes a better shape if it varies along this vector, an objective function is defined as the integral of the squared vector product of the moving vector and the tangent vector of the curve, and the unsatisfactory part of the curve is modified by minimizing this objective function. Comparisons of the new method with other methods and examples of locally adjusting the curve by minimizing the vector product are also included.
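Written out, one plausible reading of the two minimizations is the following; the functional chosen for the "change rate" is an assumption, while the second formula follows the abstract's description, with C(t) the quartic spline and m(t) the moving vector:

\min_{\text{free degrees}} \; \sum_i \int_{t_i}^{t_{i+1}} \left\| C''(t) \right\|^2 \, dt ,
\qquad
\min \; \int \left\| m(t) \times C'(t) \right\|^2 \, dt .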
Focused Web Entity Search Using the Linked-Path Prediction Model
Huang Jianbin and Sun Heli
2010, 47(12):  . 
Abstract ( 270 )   PDF (1602KB) ( 397 )
Entity search is a promising research topic because it provides users with detailed Web information. A key problem in entity search is collecting, quickly and completely, the Web pages containing the relevant entities of a specific domain. To deal with this issue, a website is modeled as a graph over a set of connected important states, and a novel algorithm named LPC is proposed to learn the optimal link sequences leading to the goal pages in which entities are embedded. The LPC algorithm uses a two-stage strategy. In the first stage, it trains an undirected graphical model, a conditional random field (CRF), to capture the sequential link patterns leading to goal pages; the conditional exponential models of the CRF can exploit a variety of state and transition features extracted from hyperlinks and HTML pages. In the second stage, the links in the crawling frontier queue are prioritized based on reinforcement learning and the trained CRF model: a discounted-reward approach from reinforcement learning computes the reward score using the CRF model learnt during the path-classification phase. Experimental results on massive real data show that the path-prediction ability of the CRF helps LPC outperform other focused crawlers.
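A minimal sketch of discounted-reward link prioritization: a link whose predicted path still needs several hops to reach a goal page is discounted by gamma per hop. The score_path callable stands in for the trained sequence model (a CRF in the paper) and its interface is an assumption:

import heapq

def prioritize_frontier(frontier, score_path, gamma=0.8):
    """frontier: list of (link, path_features); score_path returns
    (steps_to_goal, probability_of_reaching_a_goal_page)."""
    scored = []
    for link, path_features in frontier:
        steps, prob = score_path(path_features)
        reward = (gamma ** steps) * prob
        heapq.heappush(scored, (-reward, link))    # max-heap via negation
    return [heapq.heappop(scored)[1] for _ in range(len(scored))]

# toy usage with a stand-in scorer
links = [("a.html", {"depth": 2}), ("b.html", {"depth": 1})]
print(prioritize_frontier(links, lambda f: (f["depth"], 0.9)))   # b.html first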
A Density-Based Local Outlier Detecting Algorithm
Hu Caiping and Qin Xiaolin
2010, 47(12):  . 
Abstract ( 598 )   PDF (936KB) ( 811 )
With the rapid growth of data, data mining is becoming more and more important. Outlier detection is an important data mining technique whose goal is to find exceptional objects that deviate considerably from the rest of the data set. Outliers fall into two kinds, global outliers and local outliers, and in many scenarios detecting local outliers is more valuable than detecting global ones. The LOF algorithm is a well-known local outlier detection algorithm that assigns each object an outlier degree; however, when the outlier degree is calculated, it treats all attributes equally. In fact, different attributes have different effects, and the attributes with larger effects are called outlier attributes. In this paper a density-based local outlier detection algorithm (DLOF) is proposed, which derives the outlier attributes of each data object using information entropy. A weighted distance, in which outlier attributes receive larger weights, is introduced for computing the distance between two data objects, which improves detection accuracy. In addition, two further improvements for calculating the local outlier factors are presented together with an analysis of their time complexity. Theoretical analysis and experimental results show that DLOF is efficient and effective.
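A sketch of entropy-derived attribute weights plugged into a weighted distance; the direction of the weighting (lower entropy, larger weight) and the histogram binning are assumptions, since DLOF's exact formula is not reproduced here:

import numpy as np

def entropy_weights(data, bins=10):
    """Per-attribute weights from the information entropy of each column."""
    data = np.asarray(data, dtype=float)
    entropies = []
    for col in data.T:
        hist, _ = np.histogram(col, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        entropies.append(-(p * np.log2(p)).sum())
    entropies = np.array(entropies)
    weights = entropies.max() - entropies + 1e-9   # lower entropy -> larger weight
    return weights / weights.sum()

def weighted_distance(x, y, weights):
    """Weighted Euclidean distance used in place of LOF's plain distance."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(np.sum(weights * d * d)))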
A Template Technology for Transplanting from Single-GPU Programs to Multi-GPU Programs
Li Jianjiang, Li Xinggang, Lu Chuan, and Fan Shaoming
2010, 47(12):  . 
Abstract ( 386 )   PDF (1552KB) ( 378 )
Graphics processing units (GPUs) have gained increasing attention as a highly parallel processor architecture, followed by various general-purpose GPU computing technologies represented by NVIDIA CUDA. Multi-GPU parallel computing has also attracted many researchers; because it involves coordination and interaction among devices, it places higher demands on data division and data communication. To reduce the complexity of developing multi-GPU software, the authors propose a template-based code generation technology that produces OpenMP+CUDA source code, supporting the parallel transplantation of single-GPU programs. It uses simple directive statements to describe the data division and communication, and generates a multi-threaded CUDA memory-management API in C that wraps the CUDA memory operations and handles CPU-GPU and GPU-GPU data transfers, so multi-GPU programs can easily be generated automatically from single-GPU ones. Finally, the authors provide a sample CUDA program that solves the Laplace equation with Gauss-Seidel iteration as input to the template system. The transplanted result shows that although the generated code may contain more synchronization statements, the performance loss is small and negligible. Therefore, the template system notably reduces the cost of multi-GPU CUDA programming and improves CUDA programmers' productivity.
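For intuition only, a Python sketch of the data division and boundary exchange that the generated OpenMP+CUDA code must express for a stencil solver like the Laplace example; it assumes a row-wise split with one-row halos and does not reproduce the template system's output:

import numpy as np

def split_with_halo(grid, n_devices):
    """Divide a 2-D grid row-wise into one slice per device, each padded with a
    one-row halo from its neighbors (assumes more rows than devices)."""
    rows = np.array_split(np.arange(grid.shape[0]), n_devices)
    slices = []
    for r in rows:
        lo, hi = max(r[0] - 1, 0), min(r[-1] + 2, grid.shape[0])
        slices.append(grid[lo:hi].copy())
    return slices

def exchange_halos(slices):
    """Copy boundary rows between neighboring slices (stands in for GPU-GPU transfer)."""
    for i in range(len(slices) - 1):
        slices[i][-1] = slices[i + 1][1]      # bottom halo from the next slice's first interior row
        slices[i + 1][0] = slices[i][-2]      # top halo from this slice's last interior row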
The Information Model for a Class of Imperfect Information Game
Ma Xiao, Wang Xuan, and Wang Xiaolong
2010, 47(12):  . 
Abstract ( 341 )   PDF (1637KB) ( 566 )
With increasing research on imperfect information games (IIGs), many kinds of IIGs have been studied which vary greatly in their information properties. In general, the information sets and the relations between them determine an IIG's information properties, but sometimes the game tree can be transformed into a more efficient "flattened" form, which needs less memory and can accelerate search. The flattened form can describe the relations between information sets but has difficulty recording their composition, so representing and managing the composition of information sets in the flattened form becomes a new problem. In this paper a new concept for IIGs, the imperfect information space, is introduced and two types of IIGs are studied. A novel general information model based on a bipartite graph is then proposed. With the help of this information model, the information acquisition problem is studied and a Markov network is used to manage information. The Markov network automatically learns the dependencies among the attributes of the information model from archives of human games, which helps build the information model without expert knowledge and makes it more objective. Experiments on the Siguo game show the effectiveness of the general information model in acquiring and managing imperfect information, and also demonstrate the efficiency of the Markov network.
Improved Single Image Dehazing Using Dark Channel Prior
Hu Wei, Yuan Guodong, Dong Zhao, and Shu Xueming
2010, 47(12):  . 
Abstract ( 527 )   PDF (4519KB) ( 831 )
The dark channel prior is a statistic of haze-free natural outdoor images. Using it to estimate the thickness of haze, recent work has made significant progress in single image dehazing. However, the existing method is difficult to apply to high-resolution input images because of its heavy computation cost, and for some kinds of input images it still cannot reach the accuracy required for visually pleasing results. Motivated by this, and based on a thorough analysis of input image data, a novel image prior, the gradient prior of transmission maps, is proposed in this paper. Combining it with a multi-resolution image processing routine, a powerful and practical single image dehazing method is developed. Experimental results show that the gradient prior of transmission maps greatly reduces the computation cost of the previous method. Furthermore, the optimization methods and parameter adjustment for the new prior improve the accuracy of the transmission map computation, which yields better image quality in cases where the previous method performs poorly. Overall, compared with the state of the art, the new single image dehazing method achieves the same or even better image quality with only around 1/8 of the computation time and memory cost.
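For reference, a sketch of the baseline dark-channel transmission estimate t = 1 - omega * dark(I / A) that this work builds on; the paper's gradient prior and multi-resolution refinement are not shown, and the patch size and omega below are the commonly used defaults:

import numpy as np

def dark_channel(img, patch=15):
    """Dark channel of an RGB image in [0, 1]: per-pixel channel minimum,
    then a minimum filter over a square patch."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for i in range(mins.shape[0]):
        for j in range(mins.shape[1]):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_transmission(img, atmosphere, omega=0.95, patch=15):
    """Baseline transmission map from the dark channel of the normalized image."""
    normalized = img / atmosphere.reshape(1, 1, 3)
    return 1.0 - omega * dark_channel(normalized, patch)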
High Precision Timestamps in Network Measurement
Xie Yingke, Wang Jiandong, Zhu Chao, Zhao Zili, and Han Chengde
2010, 47(12):  . 
Abstract ( 416 )   PDF (4165KB) ( 658 )
Timestamps record the receiving and transmitting time of each packet and are of great importance in measuring network performance metrics such as delay, bandwidth and jitter. Limited by the uncertainty of packet buffering delay and interrupts, software-based timestamps can achieve only millisecond precision. Timestamps based on the global positioning system (GPS) can achieve nanosecond precision, but they are too expensive and inconvenient for large-scale use. In this paper we analyze the root causes of the inaccuracy of software-based timestamps. Based on this analysis, we propose a timestamp system built on a programmable network interface card (NIC) and a prediction-based clock synchronization (PCS) algorithm. When a packet arrives at the NIC, a hardware timestamp precisely recording the arrival time is generated in the NIC; since the timestamp is inserted by hardware, all the uncertainty introduced by software is eliminated. The PCS algorithm synchronizes the clocks of the NICs: with one NIC set as master, the slave NICs adjust their clocks according to the PCS algorithm to stay synchronized with the master's clock. We implement a prototype of the system on our gigabit network interface card, and it achieves the same accuracy as GPS. Test results show that the deviation between two cards is no more than 100 ns.
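As a generic illustration of predicting one clock from another (the actual PCS algorithm in the paper may use a different predictor), a slave clock can be modeled as slave = drift * master + offset fitted from past timestamp pairs and then inverted:

import numpy as np

def fit_clock_model(master_times, slave_times):
    """Least-squares fit of slave = drift * master + offset."""
    drift, offset = np.polyfit(np.asarray(master_times, dtype=float),
                               np.asarray(slave_times, dtype=float), 1)
    return drift, offset

def correct(slave_timestamp, drift, offset):
    """Map a raw slave timestamp back onto the master's time axis."""
    return (slave_timestamp - offset) / drift

drift, offset = fit_clock_model([0, 1e6, 2e6], [50, 1e6 + 60, 2e6 + 70])
print(correct(1e6 + 60, drift, offset))    # ~1e6 on the master axis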
Writing Mechanism in Digital Organism File System
Qiu Yuanjie and Liu Xinsong
2010, 47(12):  . 
Abstract ( 338 )   PDF (1763KB) ( 359 )
Because files in a P2P file system have many copies that are stored across a large-scale network with large delays, many P2P file systems provide no writing mechanism. Some P2P file systems, such as Pond and Ivy, do support writing, but their performance is poor. The digital organism file system (DOFS) is a new file system that implements an efficient writing mechanism. In this mechanism, the notion of an updating group is defined and the idea of an updating agent based on the updating group is proposed. A streaming updating mechanism is presented, which includes an algorithm to avoid write chaos and an algorithm to control the degree of parallelism and to check writing errors. The writing mechanism also introduces a core update copy set to rule out copies with low updating speed, and the selection algorithm for the core update copy set is given. As shown in many tests, the updating agent based on the updating group idea helps reduce network overhead, and a good core update copy set emerges during the writing process. In networks with large delays, the writing speed of streaming updating is higher than that of synchronous updating. Test results show that the performance of DOFS is better than that of similar systems.
CrossDomain Opinion Analysis Based on RandomWalk Model
Wu Qiong, Tan Songbo, Xu Hongbo, Duan Miyi, and Cheng Xueqi
2010, 47(12):  . 
Abstract ( 307 )   PDF (1649KB) ( 359 )
Nowadays more and more people express their opinions on products, books, movies and other topics at review sites, forums, discussion groups and blogs, and determining the opinion expressed in a given Web document (opinion analysis) has drawn much attention. To guarantee accuracy, many opinion analysis methods require abundant labeled data, but labeled data are very unevenly distributed across domains, so in recent years several studies have addressed cross-domain opinion analysis. Most existing attempts, however, rely only on labeled documents or labeled sentiment words, and therefore fail to uncover the full knowledge connecting documents and sentiment words. This paper proposes an approach to cross-domain opinion analysis based on a random-walk model that simultaneously uses documents and words from both the source and the target domain. The approach makes full use of the mutual reinforcement between documents and words by fusing four kinds of relationships: document-document, word-word, word-document and document-word. Experimental results indicate that the proposed algorithm improves the performance of cross-domain opinion analysis dramatically: its average accuracy is about 15% higher than traditional classifiers and about 7% higher than the state-of-the-art method.
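A minimal sketch of the mutual-reinforcement idea: sentiment scores are propagated back and forth between documents and words over the four relationship matrices. The update rule, damping factor and normalization below are assumptions, not the paper's exact model:

import numpy as np

def random_walk_scores(doc_doc, word_word, doc_word, labeled_docs, alpha=0.85, iters=50):
    """labeled_docs: {doc index: +1 or -1 seed score}. Returns (doc_scores, word_scores);
    positive values lean positive sentiment, negative values lean negative."""
    def row_normalize(m):
        m = np.asarray(m, dtype=float)
        s = m.sum(axis=1, keepdims=True)
        return np.divide(m, s, out=np.zeros_like(m), where=s > 0)

    Mdd, Mww, Mdw = map(row_normalize, (doc_doc, word_word, doc_word))
    Mwd = row_normalize(np.asarray(doc_word, dtype=float).T)
    seed = np.array([labeled_docs.get(i, 0.0) for i in range(len(doc_doc))])
    d = seed.copy()
    w = np.zeros(len(word_word))
    for _ in range(iters):
        d = alpha * 0.5 * (Mdd @ d + Mdw @ w) + (1 - alpha) * seed   # documents reinforced by words
        w = alpha * 0.5 * (Mww @ w + Mwd @ d)                        # words reinforced by documents
    return d, w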
Recovering the Use Case from Object-Oriented Programs by Static Analysis
Ye Pengfei, Peng Xin, and Zhao Wenyun
2010, 47(12):  . 
Abstract ( 366 )   PDF (2226KB) ( 324 )
During software maintenance, the maintainer can obtain helpful information by reading the use case documents; the problem in practice is that only out-of-date or incomplete use case information is available. To solve this problem, a novel approach is proposed to identify use cases from object-oriented source code at the functional-logic level of the software system. The behavior protocols of the high-level classes that interact with the user interface are analyzed to discover the high-level system running scenarios, which form the high-level parts of the recovered use cases. The conventional branch-reserving call graph is then extended to an object-oriented version, the object-oriented branch-reserving call graph (OO-BRCG). After an appropriate pruning process is applied to the OO-BRCG, each running path remaining in the pruned graph is regarded as a low-level part of a recovered use case, and combining the high-level and low-level parts yields the complete use cases. An experiment on a real-world object-oriented system shows that this approach achieves very high recall with an acceptable loss of precision, confirming its overall effectiveness.