ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 February 2012, Volume 49 Issue 2
Analyzing and Modeling of Nodes Churn for Large-Scale P2P Reliable Media Streaming
Yue Guangxue and Li Renfa
2012, 49(2):  217-230. 
With the rapid development of broadband network technologies such as IPv6 and 3G, media streaming has become an important Internet application. Because of media streaming services' high bandwidth requirements and long durations, providing large-scale media streaming service over the Internet is a challenging problem. In existing P2P networks, the churn caused by nodes' dynamic behavior seriously affects the reliability, usability, service response speed and lifetime of large-scale reliable P2P media streaming services, and is one of the main problems such services face. Consequently, analyzing the impact of churn on the network and finding reasonable and effective inhibitory mechanisms have become pivotal research issues in this field. This paper surveys the existing theories and methods concerning churn. Based on the impact of nodes' dynamic behavior on network performance, the paper puts forward models to measure important indicators such as churn period, churn frequency, and churning node connections. It also presents rules and strategies to handle churn. Simulation results show that the model can effectively reduce churn and improve the network performance and service quality of P2P reliable media streaming, so that the network becomes more stable and gains sustainable serving ability.
A Game Control Scheme Based on Prediction of Trust and Service for Wireless Access Service
Gui Jinsong and Wu Min
2012, 49(2):  231-242. 
Selfish mobile nodes not only refuse to forward other nodes' packets without payoff, but also have an incentive to occupy more system resources. Therefore, a game control scheme based on prediction of trust and requested service is proposed in this paper. The scheme predicts the future levels of trust and requested service under given attribute conditions from the current trust grade and service level, and on this basis defines an acceptance probability and a decision-making condition. Upon receiving a request from a mobile node, if a wireless access point decides to handle it based on the acceptance probability, the mobile node's future levels of trust and requested service are predicted using a Bayesian network model. Using the predicted values, the acceptance probability, and the mobile node's cheating probability, which the system calculates from the history of the client's payments for access services, the wireless access point decides whether to respond to the mobile node's request or reject it. A worked example and simulation results show that the scheme provides a favorable incentive effect and good service response capacity to clients, and that its game cost is smaller than that of existing schemes.
An Accurate Fingerprinting Localization Algorithm Based on Common Beacons
Zhao Fang, Luo Haiyong, Ma Yan, and Xu Junjun
2012, 49(2):  243-252. 
WiFi fingerprinting localization is currently the most promising method for building large-scale urban localization systems for both indoor and outdoor environments. To reduce the negative effect caused by fluctuation of the received signal strength (RSS) and to improve positioning accuracy and robustness, an accurate radio fingerprinting localization algorithm based on common beacons is presented in this paper. It formulates object localization as a Bayesian estimation problem. By employing a Gaussian mixture model to accurately represent the complex training fingerprint pattern, and using a Markov-chain state transition model and an adaptive grid collection selection method based on the posterior probability to exploit the object's past states and environment layout information, the algorithm not only reduces the size of the grid search space, but also rules out impossible position jumps during movement, improving localization accuracy and robustness. Practical experimental results show that the proposed positioning method achieves a localization error within 3 m with 90 percent probability, which is much better than the single-Gaussian localization model.
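As a much-simplified illustration of the Bayesian grid formulation, the sketch below scores each grid cell by the Gaussian likelihood of the observed RSS over the beacons the observation shares with that cell's fingerprint. It uses a single Gaussian per beacon rather than the paper's Gaussian mixture and omits the Markov transition model; all names and signal values are invented.

```python
import math

# Hypothetical fingerprint database: grid cell -> mean RSS (dBm) per beacon.
FINGERPRINTS = {
    (0, 0): {"ap1": -40.0, "ap2": -70.0},
    (0, 1): {"ap1": -55.0, "ap2": -60.0},
    (1, 0): {"ap1": -70.0, "ap2": -45.0},
}
SIGMA = 4.0  # assumed RSS noise standard deviation in dB

def gaussian(x, mu, sigma):
    # Unnormalized Gaussian likelihood of observing x given mean mu.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def posterior(observation, prior=None):
    """Single-Gaussian Bayesian grid estimate over common beacons."""
    cells = list(FINGERPRINTS)
    prior = prior or {c: 1.0 / len(cells) for c in cells}
    post = {}
    for cell in cells:
        # Likelihood uses only beacons common to the observation and the cell.
        common = set(observation) & set(FINGERPRINTS[cell])
        lik = 1.0
        for b in common:
            lik *= gaussian(observation[b], FINGERPRINTS[cell][b], SIGMA)
        post[cell] = prior[cell] * lik
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

obs = {"ap1": -42.0, "ap2": -68.0}  # an observation closest to cell (0, 0)
post = posterior(obs)
best = max(post, key=post.get)
print(best)
```

In the full algorithm, the prior passed between time steps would come from the Markov transition model, which is what restricts impossible position jumps.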
Design of MAC Scheme in Multi-Channel Multi-Radio WMNs
Luo Juan, Pan Chen, and Li Renfa
2012, 49(2):  253-260. 
A wireless mesh network is a communications network made up of radio nodes organized in a mesh topology. Aiming at multi-channel assignment in wireless mesh networks, a new MAC scheme named MRMC-MAC is proposed. The MRMC-MAC scheme includes default receiver channel allocation, distribution of main switchable channels, node communication, and the update of main switchable channels. In MRMC-MAC, channel allocation follows a received-load-based allocation algorithm. The received load is used as the priority parameter in channel allocation, which ensures that the node with the heaviest received load has the highest priority to obtain a lightly loaded channel, while lightly loaded nodes share the same default receiver channel. Thus, the load is balanced between channels. In addition, the hidden terminal problem in multi-channel multi-radio networks is analyzed, and some solutions to it are presented. Simulation of MRMC-MAC is conducted on the NCTUns 5.0 platform, and the results show that network capacity is greatly improved under MRMC-MAC.
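The received-load priority rule can be sketched as a simple greedy assignment: the heaviest-load node chooses first and takes the currently lightest channel. This illustrates the load-balancing idea only, not the published MRMC-MAC algorithm; the node loads and channel ids are invented.

```python
def allocate_channels(node_loads, channels):
    """Greedy load-balanced channel assignment.
    node_loads: dict node -> received load; channels: list of channel ids."""
    channel_load = {ch: 0.0 for ch in channels}
    assignment = {}
    # Heaviest received load gets the highest priority to pick a channel.
    for node, load in sorted(node_loads.items(), key=lambda kv: -kv[1]):
        ch = min(channel_load, key=channel_load.get)  # lightest channel so far
        assignment[node] = ch
        channel_load[ch] += load
    return assignment, channel_load

nodes = {"a": 9.0, "b": 7.0, "c": 3.0, "d": 2.0}
assign, loads = allocate_channels(nodes, [1, 2])
print(assign, loads)
```

With two channels, the heavy nodes "a" and "b" land on different channels and the light nodes fill in behind them, keeping the per-channel totals close.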
Survey of End-to-End Multipath Transport Protocol
Yang Wang, Li Hewu, Wu Qian, and Wu Jianping
2012, 49(2):  261-269. 
With the deployment of multihoming and multipath routing, multiple paths between wired Internet hosts are becoming more and more common. Accompanied by the rapid development of various wireless transmission technologies in recent years, wireless access has become a main access method of the Internet. The swift increase in wireless terminals with multiple network interfaces and the rapid spread of various heterogeneous wireless networks have made it possible for terminals to carry out concurrent multipath transfer. Concurrent multipath transfer can aggregate the bandwidth of multiple paths and thus greatly improve network utilization, and has therefore become a hot research topic. Meanwhile, the diversity of paths poses challenges to multipath transport protocol design, such as how to deal with the receive-buffer blocking caused by out-of-order delivery of data packets, the throughput suppression caused by improper multipath packet scheduling, the bandwidth underutilization caused by poor cooperation among paths, and the unfair resource allocation caused by path heterogeneity. This paper surveys how the existing reliable multipath transport protocols address these challenges. In conclusion, none of the existing protocols has completely solved them. Some open research problems and future trends are also discussed.
TCLM-P2P: Task Collaboration Logic Model Oriented to P2P Community
Wang Yang, Wang Ruchuan, Yan Yuanting, Han Zhijie, and Zhao Baohua
2012, 49(2):  270-277. 
Traditional P2P networks are mainly applied to file sharing and instant messaging. However, performing task collaboration based on a P2P community is a challenging job. Previous research indicated that task collaboration in P2P networks is greatly restricted by free-riding behaviors. To realize effective task allocation and collaboration in a P2P network environment, this paper presents a task collaboration logic model oriented to P2P communities. Based on agent and multi-agent theory, the paper first introduces some concepts including the peer body, half-peer body and P2P community; then TCLM-P2P is presented, including some collaboration axioms and rules. To enhance the incentive mechanism, virtual score becomes the main goal that each peer endeavors to pursue. In addition, a task collaboration algorithm based on the contract net protocol is presented. The proposed algorithm is composed of two phases: task collaboration, and a second round of task bidding when some peers fail to complete the former task. Compared with traditional task collaboration models, the presented model has a feasible incentive mechanism in the process of task allocation and collaboration. The developed prototype and simulation results indicate that the TCLM-P2P model is feasible and effective. It can not only incentivize self-interested peers to participate in task allocation and collaboration, but also restrain peers' free-riding behaviors to some degree.
A Topology Constructing Algorithm Based on Punishment and Cultivation in Mobile P2P
Liu Jiaqi, Chen Zhigang, Li Deng, and Ren Zhong
2012, 49(2):  278-285. 
A topology constructing algorithm based on punishment and cultivation, called TCBPC, is presented. Aiming at non-cooperative behaviors such as free-riding, Sybil attacks and whitewashing, a self-supervising and self-punishing scheme is presented to construct an adaptive topology. Under this scheme, non-cooperative nodes are punished and cooperative nodes are encouraged, so that cooperative nodes can obtain services efficiently. A mobile P2P overlay topology is constructed, which integrates limited state maintenance, local communication, information exchange and a global view. Simulation results show that the topology constructed by the TCBPC algorithm has better scalability, stability and fault tolerance, and improves searching efficiency.
Network Architecture Design for Data Centers Towards Cloud Computing
Wang Cong, Wang Cuirong, Wang Xingwei, and Jiang Dingde
2012, 49(2):  286-293. 
In recent years, the rapid development of cloud computing has brought significant innovation to the whole IT industry, shifting the Internet from delivering information to delivering services directly. For cloud computing, enterprises need to build their own data centers, as private clouds or as the backbone of public clouds, to deploy new Internet information services. The traditional network structure and control plane mechanisms of current data centers have inherent limitations in network capacity and cost-effectiveness, and current practice also cannot support the network multi-tenancy that is essential for cloud computing. Therefore, this paper proposes a novel data center architecture that builds the data center network using only low-cost commodity programmable switches and servers, and also gives a flexible virtual network bandwidth management mechanism based on convex optimization, which supports coordinated work between the programmable switches and layer-2.5 agents residing in the servers. Through a dynamic bandwidth allocation algorithm, it provides better support for the resource virtualization that is common in cloud computing applications. Experimental results show that the proposed data center network architecture reduces building cost significantly and improves network throughput remarkably, and that the virtual network management mechanism provides a flexible manner of bandwidth allocation.
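The bandwidth allocation step can be illustrated with the classic proportional-fair convex program, maximizing the weighted sum of log-utilities under a capacity constraint, whose solution has a closed form. This is a generic stand-in for the paper's optimizer, not its actual mechanism; the tenant names and weights are invented.

```python
def proportional_fair(weights, capacity):
    """Closed-form solution of: maximize sum(w_i * log(x_i))
    subject to sum(x_i) = capacity, x_i > 0.
    Each tenant receives capacity in proportion to its weight."""
    total = sum(weights.values())
    return {tenant: capacity * w / total for tenant, w in weights.items()}

# Three hypothetical tenants sharing a 100-unit virtual link.
alloc = proportional_fair({"tenant1": 2.0, "tenant2": 1.0, "tenant3": 1.0}, 100.0)
print(alloc)
```

A dynamic allocator would re-solve this (or a richer program with per-path constraints) whenever tenant demands change, which is the kind of flexibility the abstract attributes to the convex-optimization mechanism.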
Reverse Hash Chain Traversal Based on Binary Tree
Fu Jianqing, Wu Chunming, Wu Jiyi, and Ping Lingdi
2012, 49(2):  294-303. 
An algorithm improving the time and space complexity of reverse Hash chain traversal is proposed. By mapping the traversal of a reverse Hash chain to the postorder traversal of a binary tree, the proposed algorithm reduces the product of the total number of Hash operations and the storage space required to O(n(lb n)²), where n is the length of the reverse Hash chain. Analysis and proof using properties of the binary tree show that the proposed algorithm needs to store only ⌈lb n⌉+1 nodes at a time, and needs no more than (⌈lb n⌉/2+1)n Hash operations in total. Compared with other algorithms, the proposed one can be applied to Hash chains of any length, eliminating the limitation that the chain length must be an integer power of 2. An advanced algorithm is then proposed by mapping the traversal of a reverse Hash chain to the postorder traversal of a k-ary tree, where k is an integer no less than 3; the space required is reduced to ⌈log_k((k-1)n+1)⌉, but the number of Hash operations required rises to ((⌈log_k((k-1)n+1)⌉-1)k/2+1)n. Finally, another advanced algorithm is proposed by splitting the Hash chain into p parts before traversal, where p is an integer no less than 2. The number of Hash operations required is reduced to (⌈lb(n/p)⌉/2+1)n, but the space required rises to (⌈lb(n/p)⌉+1)p.
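For intuition about the time/space trade-off being optimized, the sketch below contrasts building a Hash chain with a simple checkpointing traversal that stores every step-th value and regenerates each segment before emitting it in reverse. This is the elementary square-root-style scheme, not the binary-tree algorithm of the paper, which achieves better bounds; the seed and parameters are invented.

```python
import hashlib

def h(x: bytes) -> bytes:
    # One Hash step of the chain.
    return hashlib.sha256(x).digest()

def build_chain(seed: bytes, n: int):
    """v[0] = seed, v[i] = H(v[i-1]); the chain is later released in
    reverse order v[n-1], ..., v[0] (e.g. for one-time authentication)."""
    v, out = seed, [seed]
    for _ in range(n - 1):
        v = h(v)
        out.append(v)
    return out

def reverse_traverse_checkpointed(seed: bytes, n: int, step: int):
    """Emit v[n-1], ..., v[0] while storing only ~n/step checkpoints
    plus one working segment, instead of the whole chain."""
    # Pass 1: walk forward once, keeping every `step`-th value.
    checkpoints, v = {}, seed
    for i in range(n):
        if i % step == 0:
            checkpoints[i] = v
        if i < n - 1:
            v = h(v)
    # Pass 2: regenerate each segment from its checkpoint, emit in reverse.
    for seg_start in sorted(checkpoints, reverse=True):
        seg_end = min(seg_start + step, n)
        seg = [checkpoints[seg_start]]
        for _ in range(seg_end - seg_start - 1):
            seg.append(h(seg[-1]))
        yield from reversed(seg)

n = 10
chain = build_chain(b"seed", n)
emitted = list(reverse_traverse_checkpointed(b"seed", n, step=3))
print(emitted == list(reversed(chain)))
```

The binary-tree mapping in the paper improves on this by keeping only ⌈lb n⌉+1 values while bounding the total recomputation, rather than the ~n/step checkpoints stored here.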
Key Agreement Protocol for Wireless Sensor Networks Using Self-Certified Public Key System
Ren Yongjun, Wang Jiandong, Xu Dazhuan, Zhuang Yi, and Wang Jian
2012, 49(2):  304-311. 
Wireless sensor networks use small nodes with constrained capabilities to sense, collect, and disseminate information in many types of applications. As sensor networks become widespread, security issues become a central concern. The design of key agreement protocols, whose main objective is to provide secure and reliable communication, is one of the most important aspects and a basic research field of secure wireless sensor networks. A self-certified public key system, which requires no certificate management and has no key escrow problem, is ideal for resource-constrained wireless sensor networks. However, the existing sensor network key agreement protocols based on self-certified public key cryptography suffer from low security and high energy consumption. First, the protocol proposed by Yoon et al is analyzed, and it is pointed out that the protocol cannot resist the key compromise impersonation attack. Then the idea of implicit authentication in the MTI protocol family is adopted to devise a new authenticated key agreement protocol for wireless sensor networks using self-certified public keys. To our knowledge, the proposed scheme is the first provably secure key agreement protocol for wireless sensor networks based on a self-certified public key system. The scheme not only provides greater security, but also achieves the most efficient communication and requires less energy than the existing relevant protocols, because it needs only two message passes.
Adaptive Neighbor Multi-Objective Evolutionary Algorithm Based on Hypervolume Indicator
Zheng Jinhua, Li Ke, Li Miqing, and Wen Shihua
2012, 49(2):  312-326. 
There are two key factors in designing multi-objective evolutionary algorithms (MOEAs). One is how to ensure that the evolutionary procedure approaches the Pareto optimal set, and the other is how to obtain a well-distributed solution set. A neighbor tree containing the relation that represents the closeness of individuals is defined. Together with the Pareto dominance relationship, a density estimation metric, neighbor tree density, is proposed to assign fitness. To save computational cost, a novel algorithm to calculate the exclusive hypervolume indicator is proposed: one calculation suffices (similar methods normally need two) when evaluating an individual's contribution to the total hypervolume. Moreover, if the size of the external population exceeds the predefined threshold, the individual that contributes least to the exclusive hypervolume indicator is eliminated. Based on all of these, an adaptive neighbor multi-objective evolutionary algorithm based on the hypervolume indicator (ANMOEA/HI) is proposed. To verify its efficiency, the proposed algorithm is tested against three state-of-the-art MOEAs on seven test problems, with four different kinds of metrics used to give a fair judgment of their performance. Experimental results demonstrate that ANMOEA/HI obtains good performance in both convergence and distribution.
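The exclusive hypervolume contribution can be sketched for the two-objective minimization case as follows. Note that this naive version computes two hypervolumes per individual, i.e. exactly the twice-computed baseline that the paper's once-only calculation improves on; the front and reference point are invented.

```python
def hypervolume_2d(points, ref):
    """Area dominated by a set of mutually non-dominated 2-objective
    minimization points, bounded by the reference point `ref`."""
    pts = sorted(points)  # ascending f1 implies descending f2 on a front
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # strip added by this point
        prev_f2 = f2
    return hv

def exclusive_contribution(points, ref, i):
    """Exclusive hypervolume of points[i]: HV(S) - HV(S \\ {points[i]})."""
    others = points[:i] + points[i + 1:]
    return hypervolume_2d(points, ref) - hypervolume_2d(others, ref)

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
ref = (5.0, 5.0)
contribs = [exclusive_contribution(front, ref, i) for i in range(len(front))]
print(contribs)
```

The individual with the smallest contribution is the one an archive-truncation step like ANMOEA/HI's would eliminate first.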
Rough Approximation Spaces and Belief Structures in Infinite Universes of Discourse
Wu Weizhi, Mi Jusheng, and Li Tongjun
2012, 49(2):  327-336. 
In rough set theory there exists a pair of approximation operators, the lower and upper approximations, whereas in the Dempster-Shafer theory of evidence there exists a dual pair of uncertainty measures, the belief and plausibility functions. To represent uncertain knowledge in various information systems in crisp and fuzzy environments, general types of belief structures and the dual pairs of belief and plausibility functions they induce in infinite universes of discourse are first introduced. Relationships between the belief and plausibility functions in the Dempster-Shafer theory of evidence and the lower and upper approximations in rough set theory are then established. It is shown that the probabilities of lower and upper approximations induced from an approximation space yield a dual pair of belief and plausibility functions, and that for any belief structure there must exist a probability approximation space such that the belief and plausibility functions defined by the given belief structure are, respectively, the lower and upper probabilities induced by the approximation space. The lower and upper approximations of a set characterize the non-numeric aspect of the uncertainty of the available information and can be interpreted as the qualitative representation of the set, whereas the belief and plausibility measures of the set capture the numeric aspect and can be treated as its quantitative characterization. Finally, potential applications of the main results to knowledge discovery in intelligent information systems in various situations are explored.
An Effective Model and Algorithm for Community Detection in Social Networks
Lin Youfang, Wang Tianyu, Tang Rui, Zhou Yuanwei, and Huang Houkuan
2012, 49(2):  337-345. 
In the research area of community detection in social networks, several problems exist: algorithms with comparatively satisfactory detection results have high time complexity; current fast algorithms for large-scale networks yield low-quality partitions; and models and mechanisms to express and utilize actor and link attributes are lacking. To solve these problems, this paper proposes a model of the edge stability coefficient and a model of the complete information graph that can express the tightness of relations among actors, based on which an effective community detection algorithm is designed and implemented. The proposed complete information graph model is highly general, which makes it applicable to different community detection algorithms that need fused information from actor and link attributes as input. Experiments show that the algorithm based on the edge stability coefficient and complete information graph models is effective for community detection in social networks at relatively low time cost. The algorithm is applicable to both weighted and unweighted networks, with comparatively fast speed and high-quality partition results.
Balancing Method for Skewed Training Set in Data Mining
Li Xiongfei, Li Jun, Qu Chengwei, Liu Lijuan, and Sun Tao
2012, 49(2):  346-353. 
Classification is one of the important tasks in data mining. The training sets extracted for training classifiers are usually skewed. Traditional classification algorithms usually yield low predictive accuracy on minority classes when handling skewed training sets, and existing balancing algorithms only deal with data sets that contain two classes. In order to balance training sets that have several classes, an algorithm called SSGP is introduced, based on the idea that little difference lies between cases of the same class. SSGP forms new minority-class cases by interpolating between several minority-class cases that lie close together, and makes sure that the number of cases of each minority class increases at the same speed. It is proved that SSGP does not add noise to the data set. To enhance efficiency, SSGP uses the modulus instead of calculating many pairwise dissimilarities between cases. The experimental results show that SSGP can improve the predictive accuracy of several minority classes in a single run.
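The interpolation idea can be sketched in a SMOTE-like form: a synthetic case is placed on the segment between a minority case and one of its nearest neighbours. This is only an illustration of interpolating between close same-class cases, not the SSGP algorithm itself (its multi-case interpolation, equal-speed growth guarantee, and modulus-based speed-up are not reproduced), and the sample points are invented.

```python
import random

def oversample_minority(cases, n_new, k=2, seed=0):
    """Create n_new synthetic minority-class cases by interpolating
    between a case and one of its k nearest neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(cases)
        # k nearest neighbours of `base` by squared Euclidean distance.
        neighbours = sorted(
            (c for c in cases if c is not base),
            key=lambda c: sum((a - b) ** 2 for a, b in zip(base, c)),
        )[:k]
        other = rng.choice(neighbours)
        t = rng.random()  # interpolation parameter in [0, 1)
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(base, other)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1), (5.0, 5.0)]
new_cases = oversample_minority(minority, n_new=3)
print(len(new_cases))
```

Because each synthetic case lies on a segment between existing same-class cases, it stays inside the class's convex hull, which is the intuition behind SSGP's claim of not adding noise.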
A Feature Selection Method Based on Maximal Marginal Relevance
Liu He, Zhang Xianghong, Liu Dayou, Li Yanjun, and Yin Lijun
2012, 49(2):  354-360. 
With the rapid growth of textual information on the Internet, text categorization has become one of the key research directions in data mining. Text categorization is a supervised learning process, defined as automatically assigning free text to one or more predefined categories. At present, text categorization is necessary for managing textual information and has been applied in many fields. However, text categorization has two characteristics: high dimensionality of the feature space and a high level of feature redundancy. To address them, χ² statistics are used to deal with the high dimensionality of the feature space, and information novelty is used to deal with the high level of feature redundancy. According to the definition of maximal marginal relevance, a feature selection method based on maximal marginal relevance is proposed, which can reduce redundancy between features in the process of feature selection. Furthermore, experiments are carried out on two text data sets, namely Reuters-21578 Top10 and OHSCAL. The results indicate that the feature selection method based on maximal marginal relevance is more efficient than χ² and information gain. Moreover, it can improve the performance of three different categorizers, namely naive Bayes, Rocchio and kNN.
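The maximal-marginal-relevance criterion, relevance to the class minus redundancy with already-selected features, can be sketched with mutual information standing in for both terms. This is a generic MMR sketch on invented toy data, not the paper's exact method (which combines χ² relevance with information novelty); the trade-off weight `lam` is an assumption.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """MI (natural log) between two discrete sequences of equal length."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(
        (c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

def mmr_select(features, labels, k, lam=0.7):
    """Greedy MMR selection: pick the feature maximizing
    lam * relevance(f, class) - (1 - lam) * max redundancy(f, selected)."""
    remaining, selected = dict(features), []
    while remaining and len(selected) < k:
        def score(name):
            rel = mutual_information(remaining[name], labels)
            red = max(
                (mutual_information(remaining[name], features[s]) for s in selected),
                default=0.0,
            )
            return lam * rel - (1 - lam) * red
        best = max(remaining, key=score)
        selected.append(best)
        del remaining[best]
    return selected

labels = [0, 0, 1, 1, 0, 1, 0, 1]
features = {
    "copy_of_label": [0, 0, 1, 1, 0, 1, 0, 1],   # perfectly relevant
    "noisy_label":   [0, 0, 1, 1, 0, 1, 1, 0],   # partially relevant
    "constant":      [1, 1, 1, 1, 1, 1, 1, 1],   # irrelevant
}
print(mmr_select(features, labels, k=2))
```

The redundancy penalty is what distinguishes MMR from plain relevance ranking: a second copy of an already-selected feature would score poorly even though its individual relevance is high.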
Constrained Conditional Random Fields for Semantic Annotation of Web Data
Dong Yongquan, Li Qingzhong, Ding Yanhui, and Peng Zhaohui
2012, 49(2):  361-371. 
Semantic annotation of Web data is a key step in Web information extraction. The goal of semantic annotation is to assign meaningful semantic labels to the data elements of an extracted Web object. It is a hot research topic that has gained increasing attention all over the world in recent years. Conditional random fields are the state-of-the-art approach, exploiting sequence characteristics to achieve better labeling. However, traditional conditional random fields cannot simultaneously use existing Web databases and the logical relationships among Web data elements, which leads to low precision of Web data semantic annotation. To solve this problem, this paper presents a constrained conditional random fields (CCRF) model to annotate Web data. The model incorporates confidence constraints and logical constraints to efficiently utilize existing Web databases and the logical relationships among Web data elements. To overcome the fact that the Viterbi inference approach of the traditional CRF model cannot utilize both kinds of constraints simultaneously, the model incorporates a novel inference procedure based on integer linear programming and extends CRF to naturally and efficiently support both kinds of constraints. Experimental results on a large number of real-world data sets collected from diverse domains show that the proposed approach significantly improves the accuracy of semantic annotation of Web data, and lays a solid foundation for Web information extraction.
An Unsupervised Feature Selection Approach Based on Mutual Information
Xu Junling, Zhou Yuming, Chen Lin, and Xu Baowen
2012, 49(2):  372-382. 
In data analysis, feature selection can be used to reduce the redundancy of features, improve the comprehensibility of models, and identify hidden structures in high-dimensional data. In this paper, we propose a novel unsupervised feature selection approach based on mutual information, called UFS-MI. In UFS-MI, we use a feature selection criterion, UmRMR, to evaluate the importance of each feature, taking into account both relevance and redundancy: mutual information measures the dependence of a feature on the latent class (relevance) and the dependence between features (redundancy). In the new algorithm, features are selected or ranked stepwise, one at a time, by estimating the capability of each candidate feature to decrease the uncertainty of the other features, i.e. its capability of retaining the information they contain. The effectiveness of UFS-MI is confirmed by a theoretical proof showing that it selects features highly correlated with the latent class. An empirical comparison between UFS-MI and several traditional feature selection methods is also conducted on some popular data sets, and the results show that UFS-MI attains better or comparable performance and is applicable to both numerical and non-numerical features.
A Fast and Adaptive Object Tracking Method
Li Shanqing, Tang Liang, Liu Keyan, and Wang Lei
2012, 49(2):  383-391. 
Robust object tracking is still a challenging research topic due to dynamic changes in illumination and viewpoint, camera jitter, and partial occlusion. To handle such problems, this paper presents an adaptive object tracking method based on co-training and particle filtering. An online learning scheme with two cooperating visual classifiers is designed to alleviate the drifting problem. The histogram of oriented gradients (HOG) and the local binary pattern (LBP) are used to describe the object appearance. Two SVM classifiers are constructed separately on these two visual features and updated online within the co-training framework. This updating strategy considers the same classification problem from two complementary viewpoints, which overcomes the error accumulation problem. To reduce the search state space, we introduce a dynamical model and importance sampling from the ICONDENSATION framework to improve the precision and efficiency of the sampling procedure. A correction term is introduced to decrease the weights of false positive samples, thereby improving the performance of object tracking and classifier updating. Experimental results on four datasets verify the effectiveness and robustness of the proposed adaptive tracking method. Without decreasing tracking performance, our method also achieves a speedup of 25 times for object state estimation compared with the traditional sliding window algorithm.
Simulation-Based Non-Linear Methods for the Estimation of Execution Cycles of ARM Programs
Kong Liangliang, Jiang Jianhui, Xiao Jie, and Jiang Yuanyuan
2012, 49(2):  392-401. 
In order to estimate the execution cycles of programs running on ARM architectures accurately and quickly, a simulation-based non-linear estimator is proposed. The estimator consists of two cascaded modules: a program function profiling module and a program execution time prediction module. The profiling module can be implemented directly with Sim-profile, an instruction-accurate simulator. According to the non-linear behavior of program execution time on advanced processors and the dynamic instruction counts during program execution, the prediction module is implemented by an artificial neural network (ANN). However, besides the problem of local minima, an ANN is not well suited to small-sample regression: it depends on the designer's prior knowledge, which determines the topology of the model and finally impacts its performance. To overcome these limitations of ANNs, a non-linear method based on the least squares support vector machine (LS-SVM) is further proposed to map the number of executed instructions to execution cycle counts. Experimental results show that simulation-based non-linear estimators implemented with the two non-linear methods, especially the LS-SVM based method, achieve higher precision in estimating program execution cycles at lower simulation cost.
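As a toy illustration of the regression step, the sketch below fits a linear map from dynamic instruction counts to cycle counts by ordinary least squares. The paper argues that this relationship is non-linear on advanced processors, which is precisely what motivates the ANN and LS-SVM regressors, so the linear fit is only a baseline; all numbers are invented.

```python
def fit_linear_1d(xs, ys):
    """Ordinary least squares fit: cycles ≈ a * instructions + b.
    A linear stand-in for the paper's non-linear LS-SVM regressor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Invented training sample: dynamic instruction counts vs measured cycles.
insns = [1000, 2000, 3000, 4000]
cycles = [1500, 2900, 4600, 6000]
a, b = fit_linear_1d(insns, cycles)
est = a * 2500 + b  # predicted cycles for an unseen program
print(round(a, 3), round(est))
```

In the actual estimator, the profiling module would supply per-type instruction counts and a kernel regressor would replace the single slope, capturing the non-linear effects (caches, pipelines) that a straight line cannot.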
Representing and Perceiving Environment of Complex Self-Adaptive Multi-Agent Systems
Dong Menggao, Mao Xinjun, Guo Yi, and Qi Zhichang
2012, 49(2):  402-412. 
Implementation of a self-adaptive system should be premised on explicitly representing and efficiently perceiving its environment, which is a challenge for self-adaptive system research. In this paper, the autonomous entities in self-adaptive systems are abstracted as agents, and complex self-adaptive systems are regarded as multi-agent organizations. Based on dynamic binding, we present an adaptive mechanism and its supporting framework for developing self-adaptive agents. In the framework, the environment is treated as a first-class abstraction, and a language is provided to abstract and describe the environment in which self-adaptive multi-agent organizations are situated. To perceive the environment efficiently, two approaches are presented, based on event publish-subscribe and on softsensors, together with a method for dynamically associating softsensors with the environment. With these approaches, the developed complex self-adaptive systems can represent their environment explicitly and perceive it transparently, and such systems are easy to maintain and upgrade. This paper introduces SADE, the supporting platform for the above mechanism, technology and language. In addition, a case study is presented to illustrate the feasibility and effectiveness of the proposed approaches.
Real-Time Systems Contact Checking and Resolution Based on Time Petri Net
Zhou Hang, Huang Zhiqiu, Zhu Yi, Xia Liang, and Liu Linyuan
2012, 49(2):  413-420. 
Time Petri nets are widely used to model real-time control systems. Contacts are important behaviors of Petri nets and their extension models, and resolving contacts is key to analyzing a model's dynamic behavior. Because time constraints are introduced, the enabling and firing semantics of time Petri nets are more complex than those of Petri nets, and contact checking and resolution become more difficult. Time intervals of transitions are defined for the case where these transitions are durably enabled, and the correctness of this definition is discussed. An approach is proposed to check contacts in a time Petri net, and a contact resolution approach based on changing the time constraints of transitions is defined and proved. The applicability of the approach is verified by an example analysis. The results show that applying our approach to check the resource contacts of a time Petri net is feasible.
JavaScript Typing System with Prediction
Li Shisheng, Cheng Buqi, Li Xiaofeng, Sun Guangzhong, and Chen Guoliang
2012, 49(2):  421-431. 
Abstract ( 526 )   HTML ( 2)   PDF (3347KB) ( 368 )  
Related Articles | Metrics
As the Internet and the World Wide Web become ever more popular and the JavaScript programming language comes to play a key role in Web browsers, investigating the behavior of JavaScript applications is important for improving browser performance and user experience. Traditional studies hold that the dynamic typing nature of the JavaScript language is the major performance bottleneck, so the main optimizations in most advanced mainstream JavaScript engines focus on dynamic typing problems. To study the dynamic typing nature of JavaScript in depth, two novel prediction-based algorithms, called "type prediction" and "position-based inline caching", are introduced to tackle these problems. With these algorithms, the typing system of the JavaScript language is studied systematically, and the techniques are evaluated with a representative JavaScript performance benchmark, SunSpider. In experiments with the SunSpider applications, the prediction-based algorithms identify types with 99% accuracy on average. It is therefore believed that although the JavaScript language provides abundant dynamism in its typing system, actual applications do not really use all these features, and their behavior is static most of the time. This is the first time such a discovery has been made and published.
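The general idea behind type prediction at a code site can be sketched with a monomorphic inline cache (an illustrative model, not the paper's implementation): each site remembers the operand type it saw last, takes a fast path when the prediction holds, and falls back to a slow re-resolution otherwise. Mostly static type behavior then shows up as a high hit rate.

```python
# Illustrative per-site inline cache: predict the last-seen type.

class InlineCache:
    def __init__(self):
        self.cached_type = None
        self.hits = 0
        self.misses = 0

    def lookup(self, value):
        t = type(value)
        if t is self.cached_type:
            self.hits += 1        # fast path: prediction was correct
        else:
            self.misses += 1      # slow path: re-resolve and re-cache
            self.cached_type = t
        return t

site = InlineCache()
for v in [1, 2, 3, "x", 4]:       # a site seeing mostly ints
    site.lookup(v)
print(site.hits, site.misses)     # 2 3
```

With a realistic, mostly static workload the hit ratio approaches the 99% prediction accuracy the abstract reports; the mixed-type toy input above is deliberately adversarial.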
Triangulation of Implicit Surfaces via Shell-Space Partition
Gao Shanshan, Zhang Caiming, Zhou Yuanfeng, and Bo Pengbo
2012, 49(2):  432-442. 
Abstract ( 559 )   HTML ( 0)   PDF (3863KB) ( 327 )  
Related Articles | Metrics
A novel and robust method is presented for the implicit surface triangulation problem. A particle system, whose fission and death are guided by Gaussian curvature, is used to sample the implicit surface. This strategy yields curvature-adaptive sampling, so there are more small triangles in the high-curvature regions of the constructed mesh, and the triangular mesh approximates the implicit surface better. More sample points are obtained by extending a proper distance along each normal vector; these new points serve as a sampling of the shell-space surface. A Delaunay tetrahedralization of the sample points then fills the shell space, and the triangles around the zero set finally form a triangulation of the implicit surface. Compared with existing methods, this method is more robust and can achieve high-quality models without post-processing. Experiments are also conducted to demonstrate the efficiency of the new method.
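The curvature-guided fission-and-death rule can be sketched roughly as follows (thresholds, the curvature estimator, and the offspring offset are all assumptions for illustration, not the paper's values): particles split where the curvature magnitude is high, survive in moderately curved regions, and die where the surface is nearly flat.

```python
# Rough sketch of curvature-adaptive particle fission and death.
# curvature: callable mapping a point to a |K| estimate (assumed given).

def adapt_particles(particles, curvature, split_k=1.0, die_k=0.01):
    out = []
    for p in particles:
        k = curvature(p)
        if k > split_k:
            # fission: keep the particle and spawn a nearby offspring
            out.append(p)
            out.append((p[0] + 1e-2, p[1], p[2]))
        elif k >= die_k:
            out.append(p)         # survive unchanged
        # else: death in a nearly flat region
    return out

pts = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
result = adapt_particles(pts, lambda p: p[0])  # toy curvature estimate
print(len(result))  # 3: one died, one survived, one split into two
```

Iterating this rule until the population stabilizes is what produces the denser sampling, and hence smaller triangles, in high-curvature regions.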
A Test Compression Scheme Based on LFSR Reseeding and Optimized Coding
Chen Tian, Liang Huaguo, Wang Wei, Yi Maoxiang, and Huang Zhengfeng
2012, 49(2):  443-451. 
Abstract ( 426 )   HTML ( 0)   PDF (971KB) ( 527 )  
Related Articles | Metrics
High-density, large-scale ICs encounter many problems during testing, such as a huge amount of test data and excessive test power dissipation. This paper presents a test data compression scheme that optimizes a set of deterministic test cubes with an encoding approach before applying linear feedback shift register (LFSR) reseeding. The deterministic test set is obtained with automatic test pattern generation (ATPG) software, and is compressed into a seed set stored in an on-chip ROM. In the compression process, first, a few specified bits are used to encode partial test cubes to enhance the consistency between adjacent bits of each cube; this step aims to reduce test power. Second, to improve the encoding efficiency of test compression and further reduce test storage, the test cubes are encoded into new test cubes by compatible block code (CBC). Finally, the new test cubes are compressed into LFSR seeds. The specified bits in the CBC-encoded test set are far fewer than in the original deterministic test set, so the number of LFSR stages is reduced. Experimental results on the ISCAS-89 benchmarks show that the scheme can not only reduce test power but also increase encoding efficiency.
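The compression payoff of reseeding comes from storing only a short seed on chip and regenerating the long test pattern by clocking the LFSR. A minimal sketch (the tap positions and bit ordering here are arbitrary illustrations, not the paper's LFSR configuration):

```python
# Sketch: expand a stored seed into a test pattern by stepping an
# n-stage Fibonacci-style LFSR with hypothetical tap positions.

def lfsr_expand(seed_bits, taps, length):
    """Expand a seed (list of 0/1 bits) into `length` output bits."""
    state = list(seed_bits)
    out = []
    for _ in range(length):
        out.append(state[-1])            # shift out the last stage
        fb = 0
        for t in taps:                   # feedback = XOR of tapped stages
            fb ^= state[t]
        state = [fb] + state[:-1]        # shift in the feedback bit
    return out

# A 4-bit seed regenerates an 8-bit pattern: 2x expansion here,
# and far more for realistic cube lengths.
pattern = lfsr_expand([1, 0, 0, 1], taps=[0, 3], length=8)
print(pattern)
```

Reducing the specified bits per cube via CBC is what lets a shorter LFSR (fewer stages, hence smaller seeds) still solve the linear equations for every cube, which is the storage saving the abstract describes.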