ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 September 2009, Volume 46 Issue 9
Paper
A New Method for Fast Simulation of 3D Clouds
Liu Shiguang, Chai Jiawei, and Wen Yuan
2009, 46(9):  1417-1423. 
Dynamic cloud simulation is widely used in computer games, flight simulation, film and television advertising, etc. Realistic simulation of cloud scenes has long been a research hotspot in computer graphics. Although there is much previous work on simulating static cloud scenes, little attention has been paid to dynamic ones. Current methods for dynamic cloud simulation fall into two categories: heuristic methods and physics-based methods. The former achieve fast rendering rates, but the results are less realistic; the latter generate more realistic results but require a large amount of computation. A method that takes both rendering quality and rendering speed into account is therefore needed. The authors present a novel method for fast simulation of 3D dynamic cloud scenes. First, the velocity field of the dynamic cloud is obtained by solving the physical equations of cloud motion on a grid of relatively low resolution. Then, texture advection is employed to add details to the cloud's appearance. By fully accelerating the computation with the GPU (graphics processing unit), high rendering rates are achieved. Finally, the method is further optimized so that it can be applied on many types of hardware.
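The abstract outlines the two-stage scheme (coarse physical solve, then texture advection for detail) without giving the advection step itself. Below is a minimal CPU sketch of texture advection, assuming a semi-Lagrangian backtrace and hypothetical array shapes; the paper's GPU version would do the same work per fragment with bilinear sampling.

```python
import numpy as np

def advect_texture_coords(tex, vel, dt):
    """Semi-Lagrangian advection of per-pixel texture coordinates.

    tex : (H, W, 2) texture coordinates carrying the cloud detail
    vel : (H, W, 2) velocity field upsampled from the coarse physics grid
    dt  : time step
    """
    H, W, _ = tex.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    # Trace each pixel backward through the flow ...
    src_x = np.clip(xs - dt * vel[..., 0], 0, W - 1)
    src_y = np.clip(ys - dt * vel[..., 1], 0, H - 1)
    # ... and fetch the coordinates found there (nearest-neighbor here;
    # a GPU shader would use hardware bilinear interpolation).
    return tex[src_y.round().astype(int), src_x.round().astype(int)]
```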
Image Segmentation with Hierarchical Mean Shift
Tang Yang, Pan Zhigeng, Tang Min, Pheng Ann Heng, and Xia Deshen
2009, 46(9):  1424-1431. 
In the traditional mean shift method, connected channels can arise in the feature space. Consequently, accomplishing the segmentation in a single pass often leads to unsatisfactory results, especially at weak edges or in regions without Gaussian character. To solve this problem, the authors design a hierarchical segmentation method based on the mean shift technique. Starting from the original sampled data, clustering is performed iteratively with different bandwidths on the centers obtained in the previous pass. A tree structure is thus established between the nodes at different levels. According to this inheritance relationship, the leaf nodes are finally merged and classified into different categories. The method was implemented and tested on both gray-scale and color images. Compared with the traditional mean shift method, the hierarchical method has the advantage of preserving details at the same scale. Moreover, although multiple rounds of clustering are needed, the computational cost does not increase noticeably, owing to the decreasing sample sizes and varied bandwidths. The hierarchical mean shift method has proved promising in the experiments, but more theoretical analysis is required, and the method also needs to be refined through further experiments.
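A compact sketch of the hierarchy described above, assuming a Gaussian kernel and mode merging by coordinate rounding; the bandwidth schedule and the parent-child bookkeeping between levels are simplified.

```python
import numpy as np

def mean_shift_modes(data, centers, h, n_iter=30):
    """Shift each center to the local density mode of `data` (Gaussian kernel)."""
    centers = centers.copy()
    for _ in range(n_iter):
        for i, c in enumerate(centers):
            w = np.exp(-np.sum((data - c) ** 2, axis=1) / (2 * h ** 2))
            centers[i] = (w[:, None] * data).sum(axis=0) / w.sum()
    return np.unique(centers.round(3), axis=0)   # merge coincident modes

def hierarchical_mean_shift(data, bandwidths):
    """Each level re-clusters the centers of the previous level with a new
    bandwidth, yielding the tree whose leaf nodes are merged at the end."""
    levels = [np.asarray(data, dtype=float)]
    for h in bandwidths:
        levels.append(mean_shift_modes(levels[-1], levels[-1], h))
    return levels
```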
A Scheme for Steganography Based on Triangular Partition of Digital Images
Yu Jiande, Song Ruixia, and Qi Dongxu
2009, 46(9):  1432-1437. 
Proposed in this paper are a non-uniform partition scheme for digital images, by which the image region is triangulated based on the gray values of its pixels, and a steganography algorithm built on this partition. By fitting the given data, namely the gray values of the pixels, with the least squares method, a self-adaptive non-uniform partition algorithm is realized, and the detailed non-uniform triangulation process for an image is shown. The triangulation information of the secret image is recorded as a quaternary number, the carrier image is triangulated accordingly, and the triangulation information and gray information of the secret image are embedded into the carrier image. A new steganography algorithm is thus obtained by exploiting image reconstruction under triangulation; its outstanding advantage is the saving of encoding and decoding time. The new algorithm is applied to several kinds of images, and the results show that the hidden data in the camouflage image are difficult to perceive, the reconstructed image is satisfactory, and the algorithm is feasible.
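A self-contained sketch of the self-adaptive partition step, assuming a midpoint split into four sub-triangles (hence one quaternary digit per child) and an RMS planar-fit tolerance; the embedding of this record into the carrier image is not shown.

```python
import numpy as np

def plane_fit_rmse(pts):
    """RMS error of a least-squares planar fit z = ax + by + c to
    pixel samples (x, y, gray)."""
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return float(np.sqrt(np.mean((A @ coef - pts[:, 2]) ** 2)))

def split4(tri):
    """Midpoint subdivision of a triangle (3x2 array) into four children."""
    a, b, c = tri
    ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
    return [np.array(t) for t in ((a, ab, ca), (ab, b, bc),
                                  (ca, bc, c), (ab, bc, ca))]

def inside(tri, xy):
    """Boolean mask of (x, y) points lying inside the triangle."""
    a, b, c = tri
    def cross(u, v):
        return u[..., 0] * v[..., 1] - u[..., 1] * v[..., 0]
    d = cross(b - a, c - a)
    s = cross(xy - a, c - a) / d
    t = cross(b - a, xy - a) / d
    return (s >= 0) & (t >= 0) & (s + t <= 1)

def triangulate(tri, pts, tol, code):
    """Split while the gray values fit a plane poorly; `code` collects one
    quaternary digit (0-3) per visited child, recording the partition."""
    if len(pts) <= 3 or plane_fit_rmse(pts) <= tol:
        return
    for digit, child in enumerate(split4(tri)):
        code.append(digit)
        triangulate(child, pts[inside(child, pts[:, :2])], tol, code)
```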
Automatic Detection of Clustered Microcalcifications on Mammograms Based on 2D Particles
Wang Lei, Zhu Miaoliang, Deng Liping, and Yuan Xin
2009, 46(9):  1438-1445. 
Clustered microcalcifications (MCs) in mammograms can be an important early sign of breast cancer in women, yet they are still very difficult to detect reliably by either radiologists or computer-aided diagnosis (CAD) systems. Traditional filtering-based detection methods are vulnerable to complex textures on mammograms, and morphology-based methods have difficulty determining the structuring elements. To overcome these shortcomings, a novel detection method based on 2-dimensional particles is proposed in this paper. First, a new segmentation algorithm is used to obtain 2-dimensional particles on mammograms. Then suspicious particles are picked out according to their contrasts and background intensities. An inpainting-based method is applied to extract features from these suspicious particles, followed by a classifier that separates the microcalcifications from normal particles. Experiments on the DDSM database show that the new detection method achieves a 93.3% true positive rate with an average of 3.1 false positives per image. Compared with the original fast marching algorithm, the proposed segmentation method greatly reduces the time spent on 2-dimensional particle segmentation: on the DDSM database, the average time consumption decreases by nearly 95% (nineteen-twentieths). The segmentation method can also be used in other image processing applications.
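The abstract names an inpainting-based feature extractor without details. A hedged sketch of the idea, assuming OpenCV's Telea inpainting: remove the candidate particle, reconstruct the background from its surroundings, and use the residual as a feature (a genuine MC should leave a large residual, benign texture should not).

```python
import cv2
import numpy as np

def inpainting_feature(patch, particle_mask):
    """Residual between a patch and its inpainted version over the particle.

    patch         : uint8 gray-level ROI around a suspicious particle
    particle_mask : uint8 mask (255 inside the particle, 0 elsewhere)
    """
    background = cv2.inpaint(patch, particle_mask, 3, cv2.INPAINT_TELEA)
    residual = patch.astype(float) - background.astype(float)
    # Mean absolute residual over the particle region.
    return float(np.abs(residual[particle_mask > 0]).mean())
```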
Kernel Regression Method for Fitting Surface of Scattered Points
Zhao Liang, Zhao Chunxia, and Zhang Erhua
2009, 46(9):  1446-1455. 
Fitting a surface to scattered points is a basic problem in computer graphics. This paper proposes a new way to reconstruct meshes from unorganized points, using a mature technique from 2D signal processing: the nonparametric filter. The method generates an order-n continuous surface, guaranteeing the continuity of the surface, and the user can define any type of mesh topology. It is easy to adjust the density of mesh points in regions of interest or where the curvature is large, and a LOD model is easy to set up. The accuracy of the fit can be adjusted through the filter parameter, and the direction of the filter adapts so as to maintain the characteristics of the resulting surface. On the other hand, the method avoids time-consuming reconstruction processes such as iterative subdivision surfaces, Delaunay triangulation, and the resampling of point cloud data. It is also more robust when dealing with noisy and non-uniformly sampled point clouds. The experiments show that the algorithm generates accurate continuous surfaces and is more efficient. If only the surface and its first derivative need to be estimated, the Nadaraya-Watson fast algorithm reduces the time complexity to O(N), far less than other surface reconstruction methods. Moreover, useful information such as the local point cloud density and the normal vectors of the mesh vertices can be estimated in the process. The surface constructed by this algorithm retains all the advantages listed above on DEM data. However, if the points cannot be projected onto a 2D plane, the reconstruction process must include generating base meshes and stitching the surface patches, and continuity at the margins cannot be guaranteed.
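For reference, the basic Nadaraya-Watson estimator underlying the method, in a minimal isotropic form; the paper's version additionally adapts the kernel direction to surface features and uses a fast O(N) variant.

```python
import numpy as np

def nadaraya_watson_surface(samples, grid_xy, h):
    """Kernel regression of height z over the plane.

    samples : (N, 3) scattered points (x, y, z)
    grid_xy : (M, 2) user-chosen mesh-node positions
    h       : kernel bandwidth (controls fitting accuracy vs. smoothness)
    """
    z = np.empty(len(grid_xy))
    for i, g in enumerate(grid_xy):
        d2 = np.sum((samples[:, :2] - g) ** 2, axis=1)
        w = np.exp(-d2 / (2 * h ** 2))          # Gaussian kernel weights
        z[i] = w @ samples[:, 2] / w.sum()      # weighted average height
    return z
```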
A Source Camera Identification Method Based on the Combination of OC-SVM and MC-SVM
Wang Bo, Kong Xiangwei, and Fu Haiyan
2009, 46(9):  1456-1461. 
The multi-class classifiers used in existing source camera identification algorithms usually lead to a number of problems, such as unavoidable misclassification of images outside the training models, accuracy that decreases as the number of camera models increases, and a lack of extensibility. Addressing these problems, a method for source camera identification based on the combination of a one-class SVM and a multi-class SVM is proposed in this paper. By solving a covariance matrix equation, the authors reduce the perturbation introduced by the imaging pipeline and improve the estimation precision of the CFA interpolation coefficients. To obtain a more efficient feature space for classification, sequential forward feature selection is used to construct the feature vector given to the classifier. The strategy of combining an OC-SVM and an MC-SVM as the classifier provides an effective way to handle images outside the training models and to keep the system extensible: the OC-SVM exposes images captured by unknown camera models, and the MC-SVM trains a multi-class model that classifies the image source for the positive results of the OC-SVM. The experiments indicate that this method obtains an average accuracy of 90.4% for camera model identification across 28 cameras, and 79.3% for detecting three outlier camera models.
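A sketch of the two-stage classifier combination using scikit-learn, with assumed kernel settings; the CFA-coefficient feature extraction described above is omitted.

```python
from sklearn.svm import OneClassSVM, SVC

class SourceCameraClassifier:
    """Gate unknown camera models with a one-class SVM, then identify the
    model with a multi-class SVM trained on the known cameras."""

    def __init__(self):
        self.gate = OneClassSVM(kernel='rbf', nu=0.1)   # assumed settings
        self.identifier = SVC(kernel='rbf')

    def fit(self, features, camera_labels):
        self.gate.fit(features)                 # trained on known models only
        self.identifier.fit(features, camera_labels)
        return self

    def predict(self, features):
        known = self.gate.predict(features)     # +1 = known, -1 = outlier
        labels = self.identifier.predict(features)
        return [l if k == 1 else 'unknown' for l, k in zip(labels, known)]
```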
Web Image Clustering Based on Multiple Instance Learning
Lu Jing and Ma Shaoping
2009, 46(9):  1462-1470. 
Multiple instance learning (MIL) has been actively studied in image retrieval and annotation systems. Since each image contains several regions and each region can be regarded as an instance, image retrieval can be transformed into a MIL problem. The key assumption of MIL is that a bag is positive if at least one of its instances is a positive example; otherwise, the bag is negative. In the MIL setting, each image is viewed as a bag of semantic regions. Most state-of-the-art methods solve the MIL problem in a supervised way. In this paper, by contrast, two unsupervised frameworks for clustering multi-instance objects are proposed, based respectively on the expectation maximization (EM) approach and on iterative heuristic optimization. Under each framework, three new algorithms are introduced to find users' interests in specific Web images without any manually labeled data. The EM approach treats instances as members of concepts, each concept being modeled by a statistical process; a cluster of MI objects is then considered a multinomial distribution over the components of the mixture model of instances. The other framework is based on iterative heuristic optimization: in every iteration it selects one instance from each MI object to determine the clustering model of the MI objects, thereby transforming the multi-instance clustering problem into an ordinary clustering problem. All the algorithms are evaluated on both the MUSK benchmark data sets and a real-world Web image dataset downloaded from Yahoo, and comparative studies show the effectiveness of the proposed algorithms.
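A minimal sketch of the iterative-heuristic framework, assuming k-means as the underlying clusterer: one instance represents each bag, and the representative is re-chosen each round as the instance nearest its cluster center. This is one plausible instantiation, not necessarily the authors' exact algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def mi_cluster(bags, k, n_iter=10):
    """Cluster multi-instance objects (bags) into k groups.

    bags : list of (n_i, d) arrays, one array of instances per image/bag
    """
    reps = np.array([bag[0] for bag in bags])           # initial selection
    for _ in range(n_iter):
        km = KMeans(n_clusters=k, n_init=10).fit(reps)
        for i, bag in enumerate(bags):
            # Re-select the instance of bag i closest to its cluster center.
            center = km.cluster_centers_[km.labels_[i]]
            reps[i] = bag[np.linalg.norm(bag - center, axis=1).argmin()]
    return km.labels_
```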
Ant Colony Planning Algorithm Based on Planning Graph
Chai Xiaolong, Jiang Yunfei, and Chen Aixiang
2009, 46(9):  1471-1479. 
Graphplan is an important algorithm in intelligent planning developed in recent years and has greatly advanced the field. The Graphplan algorithm first generates a planning graph by alternately expanding action levels and proposition levels; a valid plan is then extracted from the planning graph by exhaustive backtracking. Plan extraction done this way always consumes too much time, and the algorithm is apt to get trapped in local search. In this paper, a way of searching for plan solutions using the ant colony algorithm is given: the ACP (ant colony planner) algorithm. The ant colony algorithm uses positive feedback and distributed coordination to find solution paths, and it is also robust, which is why it has been applied successfully to many NP-hard problems. The search performed by ACP is global and parallel, and ACP can accelerate convergence during solution search. The experiments show that ACP is advantageous especially in solving large-scale planning problems. Absorbing optimization and learning techniques is an emerging trend in the study of intelligent planning; since ant colony planning is based precisely on optimization techniques, it promises further progress in the field.
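The two ACO mechanisms the abstract appeals to, probabilistic construction and positive feedback, in schematic form; the encoding of planning-graph actions and the heuristic values are assumptions.

```python
import random

def choose_action(candidates, pheromone, heuristic, alpha=1.0, beta=2.0):
    """Probabilistic action choice at one planning-graph level: sample in
    proportion to pheromone^alpha * heuristic^beta."""
    weights = [pheromone[a] ** alpha * heuristic[a] ** beta for a in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

def reinforce(pheromone, plan_actions, quality, rho=0.1):
    """Positive feedback: evaporate all trails, then deposit pheromone on
    the actions of the plan just found, biasing later ants toward it."""
    for a in pheromone:
        pheromone[a] *= 1.0 - rho
    for a in plan_actions:
        pheromone[a] += quality
```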
A Trust and Reputation System Model for Open Multi-Agent System
Zhao Xiang, Huang Houkuan, Dong Xingye, and He Lijian
2009, 46(9):  1480-1487. 
Trust and reputation are important for effective interactions in open multi-agent systems. The FIRE model, designed to solve problems in open environments, is one of the most recent. However, consumers' personal characteristics were not considered in that model, so the ratings given by consumers directly reflect only the performance of providers, which weakens the model's usability. An extended FIRE model is proposed in this paper, in which personal characteristics are introduced for the consumers. These characteristics cover two aspects: the service quality the consumer expects and the attitude the consumer takes; both affect a consumer's rating of the same service. As a result, the ratings given by consumers no longer directly reflect the performance of the providers, which makes the model closer to reality. At the same time, when a consumer computes the trust ratings of providers for selection purposes, the computation depends more on the interaction trust of the consumers providing evidence, which reduces the communication effort between agents. The computational results show that E-FIRE is comparable with FIRE when fewer interactions are needed, and better when more interactions are needed.
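A one-line illustration of the extension, with an assumed linear form: the same delivered quality produces different ratings depending on the consumer's expectation and attitude.

```python
def consumer_rating(delivered, expected, attitude):
    """Rating in [-1, 1]; attitude > 1 exaggerates deviations from the
    expected quality, attitude < 1 dampens them. The functional form is
    an assumption for illustration, not the paper's exact definition."""
    return max(-1.0, min(1.0, attitude * (delivered - expected)))
```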
FRESG: A Fuzzy Description Logic Reasoner
Wang Hailong, Ma Zongmin, Yin Junfu, and Cheng Jingwei
2009, 46(9):  1488-1497. 
Because description logic serves as the theoretical counterpart of the semantic Web and provides reasoning support for it, description logic reasoners are the basic supporting bodies for bringing the semantic Web into use. Based on the fuzzy description logic F-ALC(G), the authors design and implement a description logic reasoner, named FRESG1.0, which supports the representation of, and reasoning with, fuzzy customized data types and fuzzy customized data type predicates. Briefly introduced in this paper are the reasoning services provided by FRESG1.0 and its programming language. Also described are the overall architecture of FRESG1.0 and the design and implementation of its main components, with particular attention to the reasoner's features and the algorithms and technologies adopted in its implementation. The test cases show that FRESG1.0 has more powerful capabilities for representing and reasoning with fuzzy customized data type information than other fuzzy description logic reasoners. FRESG1.0, which is open source under a very liberal license, is a complete and capable fuzzy description logic reasoner with acceptably good performance, extensive middleware, and a number of unique features, laying a foundation for future investigations and extensions.
A Hybrid Algorithm for Bayesian Network Structure Learning
Ji Junzhong, Hu Renbing, Zhang Hongxun, and Liu Chunnian
2009, 46(9):  1498-1507. 
Bayesian networks (BNs) are an important theoretical model in artificial intelligence and a powerful formalism for modeling uncertain knowledge in the real world. Recently, learning BN structure from data has received considerable attention, and researchers have proposed various learning algorithms. In particular, three efficient approaches use stochastic search to tackle the problem: genetic algorithms (GA), evolutionary programming (EP), and ant colony optimization (ACO). A hybrid algorithm combining constraint satisfaction, ant colony optimization, and a simulated annealing strategy is proposed in this paper. First, the new algorithm uses order-0 independence tests with a self-adjusting threshold to dynamically restrict the search space of feasible solutions, so that the ants' search is accelerated while better solution quality is maintained. Then, an optimization scheme based on simulated annealing is employed to improve the efficiency of the ants' stochastic search. Finally, the algorithm is tested on benchmarks of different scales and compared with recently proposed stochastic algorithms. The results show that these strategies are effective: the solution quality of the new algorithm exceeds that of the other algorithms, and its convergence is faster.
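A sketch of the order-0 pruning step, assuming discrete data and a plug-in mutual information test; the self-adjusting threshold schedule is not shown.

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Plug-in MI estimate for two discrete variables."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def order0_candidate_edges(data, threshold):
    """Order-0 (marginal) independence test: keep edge (i, j) as a
    candidate only if MI(Xi, Xj) exceeds the threshold, shrinking the
    search space the ants must explore."""
    n_vars = data.shape[1]
    return [(i, j) for i, j in combinations(range(n_vars), 2)
            if mutual_information(data[:, i], data[:, j]) > threshold]
```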
A Case Based Agent Multi-Issue Negotiation Model
Tong Xiangrong, Huang Houkuan, and Zhang Wei
2009, 46(9):  1508-1514. 
Multi-agent multi-issue negotiation under incomplete information is a challenge in open environments. Until now, however, strategies for generating optimal counter-offers under incomplete information have not been ideal. Previous work, such as that of Fatima, usually acquires opponents' preferences indirectly through data mining or other methods. On the other hand, agents usually have experience and domain knowledge that may help them reach better negotiation results. This fact inspires the authors to investigate negotiation directly with a case-based method. For this purpose, the authors propose an agent multi-issue negotiation model under incomplete information based on cases and game theory. Cases are regarded as successful interactions and can be reused in the future according to their similarity. A Pareto-optimal result is proved in this paper; in particular, the optimal counter-offer ensures maximal utility for oneself and maximal similarity of the offer for opponents. The computational complexity of the proposed algorithm is polynomial, and it is generally lower than that of Fatima's method as long as the case base is limited to a bounded size. Experimental results indicate that the utility achieved and the time to agreement compare favorably with human negotiators and with the method of Lin et al., improving on the work of Fatima.
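A schematic of the counter-offer rule stated above, under assumed utility and similarity functions: among candidate offers that preserve the agent's current utility level, choose the one most similar to the opponent's last offer.

```python
def best_counter_offer(candidates, my_utility, similarity, opp_offer, level):
    """Trade-off behind the Pareto result: keep own utility at least
    `level` while maximizing similarity to the opponent's offer.

    candidates : iterable of offers (e.g. tuples of issue values)
    my_utility, similarity : assumed scoring functions for illustration
    """
    iso = [o for o in candidates if my_utility(o) >= level]
    return max(iso, key=lambda o: similarity(o, opp_offer))
```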
Kernel Matrix Based Incremental Learning Isomap Algorithm
Wang Yaonan, Zhang Ying, and Li Chunsheng
2009, 46(9):  1515-1522. 
The Isomap algorithm operates in batch mode, meaning that all data must be available when training is done. However, in many scenarios the data arrive sequentially and their effect accumulates. When new data are added, updating the geodesic distance matrix over all data points takes a long time. In order to improve the computing speed, a kernel-based incremental learning Isomap algorithm (ILIsomap) is proposed. The geodesic distance matrix can be interpreted as a kernel matrix, and ILIsomap exploits a constant-shifting method to guarantee that this matrix satisfies the Mercer condition. ILIsomap then only needs to calculate the geodesic distances between the new data and the original data, making it possible to project test points onto the low-dimensional manifold embedding with a kernel trick, as in kernel PCA. Experimental results on the Swiss roll and Helix data sets and on a multi-posture face data set demonstrate that ILIsomap reduces the computational complexity, making it possible to quickly visualize low-dimensional manifolds embedded in high-dimensional space.
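A simplified version of the kernel view, assuming double centering of squared geodesic distances and a spectrum shift to enforce positive semidefiniteness; the paper's exact constant-shifting construction may differ.

```python
import numpy as np

def geodesic_kernel_embed(D, d):
    """Embed points given their geodesic distance matrix D (n x n).

    Double centering turns squared distances into a Gram-like matrix;
    shifting the spectrum makes it a valid Mercer kernel before the
    kernel-PCA style eigendecomposition.
    """
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n
    K = -0.5 * J @ (D ** 2) @ J
    shift = max(0.0, -np.linalg.eigvalsh(K).min())
    K = K + shift * np.eye(n)                    # now positive semidefinite
    vals, vecs = np.linalg.eigh(K)
    return vecs[:, -d:] * np.sqrt(np.maximum(vals[-d:], 0))
```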
The Equivalence Between Quantum Mealy Automata and Quantum Moore Automata
Xi Zhengjun, Wang Xin, and Li Yongming
2009, 46(9):  1523-1529. 
The proposal of quantum algorithms for factoring large numbers and for quantum search has led quantum computing into a new period of rapid development. Quantum automata theory has been a very active and fruitful research area in which many meaningful results have been obtained over the past 10 years. In this paper, the authors first define the finite Fock space over an alphabet, on which the definitions of quantum Mealy automata and quantum Moore automata are introduced; these definitions accord with the postulates of quantum mechanics. Considering a closed quantum system consisting of the two kinds of automata, free from environmental influence, and based on the principle of quantum measurement, the evolution of quantum Mealy automata and quantum Moore automata is investigated in detail. The languages generated by quantum Mealy automata and quantum Moore automata, viewed as quantum machine languages, are then presented using the basic properties of the density operator. It is well known that a complete measurement can always be represented as a unitary operation and partial tracing. Using this principle, it is shown that quantum Mealy automata are equivalent to quantum Moore automata in the case of pure states.
Research on Blog Opinion Retrieval Based on Probabilistic Inference Model
Liao Xiangwen, Cao Donglin, Fang Binxing, Xu Hongbo, and Cheng Xueqi
2009, 46(9):  1530-1536. 
In recent years, people and enterprises have paid more and more attention to the fast-growing new medium of blogs (from "Web log"). Blogs constitute a huge blogosphere by trackbacking and recommending one another. In the blogosphere, people can freely express their opinions and feelings about topics they are interested in, and can also comment on new products in the market. Retrieving bloggers' opinions on lead stories and hot topics is very important for applications such as market surveys and the discovery of, and early warning about, online public opinion. The goal of blog opinion retrieval is to retrieve blog posts that are not only related to a given query but also comment on it. This paper introduces the probabilistic inference model into blog opinion retrieval and presents an algorithm based on it. The model combines topical scoring and sentiment scoring within a uniform probabilistic inference framework: it can effectively reveal the topical relatedness between a blog post and the query as well as the strength of sentiment about the query, and then combines the resulting topical score and sentiment score into a final score. Experimental results show that the algorithm can effectively model topical facets and sentiments, and can also identify opinions about a given query.
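The combination step, in the simplest log-linear form one might assume; the paper's probabilistic inference model defines its own weighting.

```python
def opinion_score(p_topic, p_sentiment, lam=0.5):
    """Final score of a blog post: it must be both on-topic and
    opinionated, so the two probabilities are mixed multiplicatively
    (lam is an assumed mixing weight, not from the paper)."""
    return (p_topic ** lam) * (p_sentiment ** (1.0 - lam))
```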
Survey of Security of Identity-Based Cryptography
Hu Liang, Liu Zheli, Sun Tao, and Liu Fang
2009, 46(9):  1537-1548. 
Nowadays, identity-based encryption (IBE) has become a new research direction in public key encryption, and security is the most important factor in constructing an IBE scheme. When designing a public key encryption scheme, security goals are usually considered against standard attack models, and a security definition combines a security goal with an attack model. After analyzing the proposed IBE schemes, the authors present a formalized definition of IBE security and compare it with the security of traditional public key encryption. They also summarize the various mathematical assumptions on which security relies and study the relations among these assumptions. Furthermore, transformation rules among security notions and transformation methods for reaching higher security are described. It is pointed out that these transformation methods all employ some test in their construction: they add some processing of the ciphertext or construct an extra factor in the encryption phase, and can accordingly verify the validity of the ciphertext in the decryption phase. The IBE schemes are then contrasted with respect to security and efficiency, which indicates that transformations to reach higher security reduce efficiency. Finally, the disadvantages of IBE, future research directions, and open problems are summarized.
Image Forensics for Blur Detection Based on Nonsubsampled Contourlet Transform
Wang Junwen, Liu Guangjie, Dai Yuewei, Zhang Zhan, and Wang Zhiquan
2009, 46(9):  1549-1555. 
Passive-blind image forensics aims to determine the integrity and authenticity of digital images. After an image is tampered with, post-processing operations are usually applied to remove the visual edge distortion caused by splicing and thus eliminate the tampering traces; manual blurring is one of the most common. Based on this observation, a method that detects manual blurring in tampered images is proposed. First, the features of image edges are analyzed using the nonsubsampled contourlet transform, by which the image edges can be classified. Then whether an edge is blurred can be judged from the differences between normal and blurred edges. Finally, a local definition measure, defined to capture the differences between manual blurring and defocus, is used to locate the manually blurred traces. Experimental results show that this method can detect possible manual blurring in the tested images and locate the tampered boundary with relatively high accuracy; the more severe the blurring of an edge, the better the method performs. Compared with other blur detection approaches, the presented method is capable of pixel-level localization.
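A hedged sketch of a "local definition" style sharpness measure, assuming a simple gradient-energy definition over a window; the paper computes its measure from nonsubsampled contourlet coefficients.

```python
import numpy as np

def local_definition(gray, y, x, r=8):
    """Mean gradient magnitude in a (2r+1)^2 window around an edge pixel.
    Manually blurred regions tend to score low and abruptly, while
    defocus blur varies smoothly with scene depth."""
    win = gray[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1].astype(float)
    gy, gx = np.gradient(win)
    return float(np.hypot(gx, gy).mean())
```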
Program Clustering for Comprehension Based on Fuzzy Formal Concept Analysis
Xu Jiaqing, Peng Xin, and Zhao Wenyun
2009, 46(9):  1556-1566. 
Program comprehension is very important for software developers, especially when a software system is complex, is a legacy system, or lacks necessary documentation. Many techniques are available for program comprehension, among which program clustering is very popular. Program clustering can aid comprehension and architecture analysis by grouping the program units relevant to a given requirement or design element, so that developers gain a high-level, more comprehensive view of the system. Formal concept analysis (FCA) has been widely adopted in text-analysis-based program clustering. However, existing FCA-based methods rely on one-valued attributes (present or absent) and cannot deal with the fuzzy information involved in program clustering. In this paper, a text-analysis-based program clustering method using fuzzy FCA is proposed. The method includes a procedure for computing fuzzy attributes and a construction algorithm for the fuzzy concept lattice, which ensures a complete and non-redundant lattice. A semi-automatic analysis tool has been developed for the method and applied in a case study of a commercial bookstore management system. The results show that the method yields clusters with sound fuzzy features for program comprehension, which were also recognized by former developers of the system.
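For concreteness, the two derivation operators of a one-sided fuzzy concept lattice, sketched under the assumption of a minimum t-norm and a crisp extent; the paper's fuzzy attribute computation and lattice construction build on operators of this kind.

```python
def intent(objs, ctx, attrs):
    """Degree to which every object in `objs` has each attribute
    (minimum over the objects, i.e. the Goedel t-norm)."""
    return {a: min(ctx[o][a] for o in objs) for a in attrs}

def extent(degrees, ctx, objs):
    """Objects having every attribute at least to the required degree."""
    return {o for o in objs
            if all(ctx[o][a] >= d for a, d in degrees.items())}

# ctx maps a program unit to {term: membership in [0, 1]}, e.g. computed
# from hypothetical text-analysis weights of identifiers and comments.
```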
An Automated Method of Test Program Generation for Compiler Optimizations Based on Process Graph
Tao Qiuming, Zhao Chen, and Wang Yongji
2009, 46(9):  1567-1577. 
The compiler is a fundamental tool for software development, and its dependability is undoubtedly significant. As an effective and efficient means of assuring compiler quality, automatic testing has attracted much attention from both academia and industry. Optimizations are indispensable functions of modern compilers, and in recent years the advanced optimizations based on data dependence analysis have become important development issues. To test these optimizations automatically, a method of test program generation is proposed that generates test programs according to specified features of the test program's data dependence. The authors designed the LoSpec language for specifying test programs, realized automatic test program generation by adopting the process graph, a special program model that easily represents data dependence, as the intermediate model, and developed an automated test tool, LoTester. Compared with existing methods, this method is more effective for advanced and combined optimizations and has a higher degree of automation. LoTester can be used to conduct abundant testing in the early development phases of optimizing compilers, which helps find bugs earlier. So far LoTester has been applied in the development of EECC, an optimizing compiler oriented toward multimedia applications, and has yielded good results.
Real-Time System Resource Conflict Checking Based on Time Petri Nets
Zhou Hang, Huang Zhiqiu, Hu Jun, and Zhu Yi,
2009, 46(9):  1578-1585. 
Time Petri nets are widely used to model real-time systems and analyze system performance. Conflicts are important behaviors of Petri nets and their extended models, and resolving conflicts is key to analyzing a model's dynamic behavior. Several conflict checking approaches exist for extended Petri nets such as stochastic Petri nets, hybrid Petri nets (HPN), and interval speed continuous Petri nets (ICPN), but these approaches cannot be used to check conflicts in time Petri nets. Because time constraints are introduced, the enabling and firing semantics of time Petri nets are more complex than those of plain Petri nets, and conflict checking is correspondingly more difficult. To calculate the time intervals and probabilities of conflicts accurately, delay time intervals of transitions are defined based on the time constraints under durable enabling, and the soundness and completeness of this definition are discussed. An approach is then proposed to check resource conflicts in time Petri nets based on time and resource constraints, and its availability is verified by an example analysis. The results show that applying the approach to check resource conflicts of real-time systems modeled with time Petri nets is accurate and feasible.
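A small sketch of the interval arithmetic implied above, assuming a delay interval measured relative to elapsed enabling time: two transitions competing for a resource can conflict only where their remaining delay intervals overlap.

```python
def delay_interval(eft, lft, elapsed):
    """Remaining firing-delay interval of a durably enabled transition
    with static interval [eft, lft] after `elapsed` time units."""
    return (max(0.0, eft - elapsed), lft - elapsed)

def conflict_window(iv1, iv2):
    """Overlap of two delay intervals; None means the timed conflict
    cannot occur even if the transitions share a resource."""
    lo, hi = max(iv1[0], iv2[0]), min(iv1[1], iv2[1])
    return (lo, hi) if lo <= hi else None
```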
Category Distribution-Based Feature Selection Framework
Jing Hongfang, Wang Bin, Yang Yahui, and Xu Yan
2009, 46(9):  1586-1593. 
Text categorization is an important technique in data mining. The extremely high dimensionality of the feature space makes text categorization complex and expensive, so effective dimension reduction methods are much desired. Feature selection is widely used to reduce dimensionality, and many feature selection methods have been proposed in recent years; but to the authors' best knowledge, none performs very well on unbalanced datasets. This paper proposes a feature selection framework based on the difference in the category distribution of features, named category distribution-based feature selection (CDFS). The approach selects features with strong discriminative power using the distribution information of features. At the same time, weights can be assigned to categories flexibly: if larger weights are assigned to rare categories, the performance on rare categories improves, so the framework suits unbalanced data and is highly extensible. Besides, OCFS and the feature filter based on category distribution difference can be viewed as special cases of this framework. A number of implementations of CDFS are given. The experimental results on the Reuters-21578 corpus and the Fudan corpus (both unbalanced) show that both the MacroF1 and MicroF1 obtained by the implementations of CDFS given in this paper are better than those obtained by IG, CHI, and OCFS.
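One plausible instantiation of the framework, assuming a weighted deviation from the uniform category distribution as the score; the paper gives several implementations, of which this is only a sketch.

```python
import numpy as np

def cdfs_score(term_cat_counts, cat_weights):
    """Score one feature by how unevenly it distributes over categories.

    term_cat_counts : occurrences of the feature in each category
    cat_weights     : per-category weights (larger for rare categories)
    """
    p = term_cat_counts / max(term_cat_counts.sum(), 1)
    uniform = 1.0 / len(p)
    # A feature concentrated in few categories deviates strongly from
    # the uniform distribution and thus gets a high score.
    return float(np.sum(cat_weights * np.abs(p - uniform)))

def select_features(count_matrix, cat_weights, k):
    """Keep the k features with the largest distribution-difference score."""
    scores = np.array([cdfs_score(row, cat_weights) for row in count_matrix])
    return np.argsort(scores)[::-1][:k]
```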
Research on Personalized Information Retrieval Based on User’s New Interest Detection
Zou Bowei, Zhang Yu, Fan Jili, Zheng Wei, and Liu Ting
2009, 46(9):  1594-1600. 
An important characteristic of next-generation search engines is personalization. Personalized information retrieval (PIR) focuses on users: it captures users' interests of different kinds (explicit interest, implicit interest, and the interests of similar users), and this user information is integrated to improve the results of the retrieval system. Personalized information retrieval can thus grasp the user's retrieval intention and find personalized results. The authors propose the new interest detection task, which identifies the queries containing a user's new retrieval interest by detecting changes in the retrieval object. By using an improved TextTiling algorithm, the retrieval system can automatically choose an appropriate dynamic threshold and detect changes in the user's interest. Users' retrieval histories and labeled answers are used to build the experimental dataset, and the evaluation metrics include the false alarm rate, the miss rate, and the detection cost. In the experiments, the improved TextTiling algorithm improves the new interest detection system by 16.4%, and the new interest detection task in turn improves the performance of the personalized information retrieval system by 3.8%. The experiments show that mining users' interests with this method can reduce false information in user models and improve the precision of user interest detection.
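A minimal TextTiling-style sketch of new interest detection over a query stream, assuming queries are already vectorized and using a mean-minus-one-standard-deviation dynamic threshold; the paper's improved algorithm chooses its threshold differently.

```python
import numpy as np

def detect_new_interest(query_vectors, win=3):
    """Flag positions in a user's query stream where a new interest starts:
    low cosine similarity between adjacent windows signals a topic shift."""
    sims = []
    for t in range(win, len(query_vectors) - win):
        left = np.mean(query_vectors[t - win:t], axis=0)
        right = np.mean(query_vectors[t:t + win], axis=0)
        sims.append(np.dot(left, right) /
                    (np.linalg.norm(left) * np.linalg.norm(right) + 1e-12))
    sims = np.array(sims)
    thr = sims.mean() - sims.std()          # dynamic, per-user threshold
    return [t + win for t, s in enumerate(sims) if s < thr]
```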