ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 December 2011, Volume 48 Issue 12
Paper
A Collusion Detector Based on Fuzzy Logic in P2P Trust Model
Miao Guangsheng, Feng Dengguo, and Su Purui
2011, 48(12):  2187-2200. 
As peer-to-peer (P2P) applications have achieved enormous success, the security of P2P networks is attracting more and more concern. In particular, the trust problem has become a bottleneck because of the open and anonymous nature of P2P networks. A trust model provides a security mechanism for P2P networks and is proven to play an important role in improving their robustness. However, colluding peers pose a serious threat to the security of trust models. For example, colluding peers may subvert the trust model by spreading plenty of false evaluations. Most existing trust models in P2P networks are largely helpless in handling colluding peers. Some of them propose measures to make collusion attacks difficult, but they are far from satisfactory, mainly because they cannot identify the colluding peers in the network. To address this problem, a fuzzy detector of collusion (FDC) is proposed to detect the existence of colluding peers by measuring the similarity of peers' behavior. We show how to deploy FDC in a distributed P2P network. The simulation results presented show that FDC is considerably effective in detecting and thwarting collusion attacks, and that FDC can help the trust system in a P2P network resist threats from colluding attackers.
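The abstract does not specify which similarity measure FDC uses; as an illustrative sketch (our assumption, not the paper's method), colluders who exchange near-identical ratings can be flagged with a simple cosine-similarity threshold over hypothetical per-peer rating vectors:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two peers' rating vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def flag_colluders(ratings, threshold=0.95):
    # Flag pairs of peers whose rating behavior is suspiciously similar.
    peers = sorted(ratings)
    return [(p, q) for i, p in enumerate(peers) for q in peers[i + 1:]
            if cosine_similarity(ratings[p], ratings[q]) >= threshold]

# Hypothetical rating vectors: peers A and B give near-identical inflated
# scores to the same targets, while C rates independently.
ratings = {"A": [5, 5, 5, 1], "B": [5, 5, 4, 1], "C": [1, 4, 2, 5]}
print(flag_colluders(ratings))  # [('A', 'B')]
```

A real detector would of course use richer behavioral features and fuzzy membership functions rather than a hard threshold.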
An Approach of Trusted Usage Control in Distributed Environment
Hu Hao, Feng Dengguo, Qin Yu, and Yu Aimin
2011, 48(12):  2201-2211. 
In distributed environments, digital data can be easily distributed, and various kinds of security requirements emerge after data distribution. However, traditional access control solutions suffer from difficulties both in access rights authorization and in usage policy enforcement, especially under heterogeneous, distributed network environments. In this paper, a new architecture called TUC (trusted usage control) is proposed to address the information security requirements of distributed environments, based on the usage control model and trusted computing technology. TUC achieves usage control based upon the hardware trust root TPM. In this way, confidentiality, integrity and controllability of the data are assured not only in distribution, transmission and storage but also in usage. TUC is designed as a general access control solution by binding policies to the usage-controlled digital content, so it is not limited to a specific application environment. Moreover, TUC is a negotiable solution because of the key and policy negotiation in our design. In this way, both the user's and the owner's requirements are taken into consideration. The design and implementation of TUC are then detailed in this paper. Test results show that the performance of TUC is acceptable for access control in distributed environments.
FPGA Implementation and Verification of a Pseudo-Pipelined VLSI Architecture of Two Elliptic Curve Scalar Multiplications
Wang Xu, Zhang Yan, and Quan Jinguo
2011, 48(12):  2212-2218. 
Two elliptic curve scalar multiplications with independent inputs are used in some important elliptic curve cryptography algorithms. In order to reduce the execution time of these algorithms, a pseudo-pipelined VLSI architecture for two elliptic curve scalar multiplications over the binary finite field GF(2^m) is proposed. The proposed architecture is implemented on an FPGA board and verified by a systematic verification environment. A pseudo-pipelined word-serial finite field multiplier, with word size w, suitable for the two elliptic curve scalar multiplications is also developed. Implemented in hardware, this system performs two elliptic curve scalar multiplications in approximately 4⌈m/w⌉(m-1) clock cycles. Compared with other recently proposed architectures, the computation time for elliptic curve scalar multiplication is shortest when using the proposed VLSI architecture.
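The stated cycle count can be evaluated directly. A small sketch computing 4⌈m/w⌉(m-1) for a sample parameter choice (the field size and word size below are illustrative, not taken from the paper's experiments):

```python
import math

def dual_scalar_mult_cycles(m, w):
    # Clock cycles for two scalar multiplications over GF(2^m) with a
    # word-serial multiplier of word size w, per the stated bound
    # 4 * ceil(m / w) * (m - 1).
    return 4 * math.ceil(m / w) * (m - 1)

# Example: a 163-bit binary field with a 32-bit word multiplier.
print(dual_scalar_mult_cycles(163, 32))  # 3888
```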
A Trusted Computing-Based Security Architecture for Policy-Label Protection
Liu Ziwen, Feng Dengguo, and Yu Aimin
2011, 48(12):  2219-2226. 
Policies and labels are the most important parts of access control techniques. Labels present security properties of the subject and the object, while policies present logical algorithms based on the security properties carried by labels. The enforcement of an access control system is mainly decided by these two factors. Nowadays most security systems protect policies well, but almost none of them have systematic and well-defined methods to protect labels; they simply trust the operating system to do the work itself. This lack of label protection means that even if the policies are secure and well-defined, malware can still harm the system by tampering with the labels, leaving the system unsafe in the end. An architecture is proposed that protects the security labels in the system by using the TPM (trusted platform module) chip. The TPM chip is a kind of hardware specified by the TCG (Trusted Computing Group). It can be used to build a TCB (trusted computing base) in a secure system, but that TCB is too small to hold labels. By using mechanisms such as an encrypting file system and integrity measurement, we extend the edge of the TPM chip's control area and keep the labels inside this area in order to enhance the safety of the access control system. An implementation of a prototype system on the Linux OS is given, and the experiments show the security and efficiency of our implementation.
Protecting Programs Based on Randomizing the Encapsulated Structure
Chen Ping, Xing Xiao, Xin Zhi, Wang Yi, Mao Bing, and Xie Li
2011, 48(12):  2227-2234. 
Randomization methods are widely applied in practice to protect programs from attacks. There are two limitations in existing randomization techniques. One is that they are coarse-grained, so they may fail to defend against attacks that maliciously fabricate an inner variable of a function/struct/class. The other is that existing randomization techniques ignore the fact that the majority of attacks leverage input data to fabricate sensitive data objects. In this paper, a fine-grained randomization method is proposed to protect programs based on an enhanced safety structure. The technique reorders the layout of encapsulated structures (function, struct, class). In addition, the method extracts input-related arrays and separates them by inserting guards between them. The randomization approach thus not only applies randomization to functions/structs/classes to foil a number of attacks, but also prevents input-related arrays from being maliciously crafted. As an enhancement to existing randomization techniques, this method is able to protect programs from both control-flow attacks and non-control-flow attacks. The technique has been implemented in the open source compiler GCC, and the experimental results show that the approach can effectively detect real-world attacks and has low performance overhead.
A Reputation Based Attack-Resistant Distributed Trust Management Model for P2P Networks
Hu Jianli, Zhou Bin, Wu Quanyuan, and Li Xiaohua
2011, 48(12):  2235-2241. 
An important challenge regarding peers' trust valuation in peer-to-peer (P2P) networks is how to cope with issues such as dishonest feedback from malicious peers, collusion, and complex strategic fraud. However, these issues cannot be effectively addressed by existing solutions. Thus, a reputation-based distributed trust management model for P2P networks, named RATrust, is proposed to quantify and evaluate the trustworthiness of each peer and to combat these attacks from malicious peers. In RATrust, the metric of time zone is used to describe the time property of transaction experiences and recommendations. Three other metrics, namely peer trust value, short trust value and long trust value, built on the time-zone metric, are applied to express accurately the final trust level of peers. Moreover, trust deviation value and trust abuse value are introduced to refine the above trust metrics. Additionally, a reputation information storage mechanism is provided and discussed in this paper. Theoretical analysis and simulation experiments demonstrate that RATrust has advantages over current trust models in combating various dynamic malicious behaviors such as collusive malicious behaviors, dishonest feedback and strategic malicious attacks, and shows more effectiveness and stronger dynamic adaptability.
BAR-BGP: Achieving High Reliability Interdomain Routing Through Backup AS-Address Advertisement and Recovery Forwarding
Hu Qiaolin, Sun Yipin, and Su Jinshu
2011, 48(12):  2242-2252. 
The Internet is increasingly becoming a critical information infrastructure where even transient failures cause undesirable damage. BGP, the de facto standard interdomain routing protocol, is well known to suffer from a slow, reactive convergence process when following dynamic changes in network topology and policy, such as link or node failures. Many measurements show that BGP experiences frequent loss of connectivity and routing loops during routing re-convergence periods, which degrades the forwarding performance of the data plane and makes it hard to support mission-critical applications. In this paper, we present a lightweight protocol, BAR-BGP, for achieving fast recovery from interdomain link failures, which deals with transient failures by combining backup AS advertisement with reactive re-computation of feasible paths. BAR-BGP provides feasible relay forwarding through an AS node: by advertising backup ASes with very low message and storage overhead, packets can be encapsulated to a relay AS node immediately when an AS node experiences a transient failure. It uses "root cause notification" to prevent routing loops during convergence. Furthermore, BAR-BGP uses a recovery forwarding mode to mitigate the loss of connectivity in case backup AS addresses are not available. Through detailed simulation on an Internet-like topology, we show that BAR-BGP improves Internet reliability and effectively reduces transient failures and forwarding disruption during routing convergence periods.
A P2P Network Traffic Identification Approach Based on Machine Learning
Li Zhiyuan, and Wang Ruchuan
2011, 48(12):  2253-2260. 
Peer-to-peer (P2P) overlay networks are typical distributed systems in nature, and have attracted more and more attention. At present, P2P technology has been applied in file sharing, streaming media, instant messaging, and other fields. Besides, P2P network traffic accounts for more than 60% of Internet traffic. In order to better manage and control P2P traffic, it is necessary to study a P2P traffic identification model in depth. Firstly, a machine learning model based on the wavelet support vector machine (ML-WSVM) is proposed to identify known and unknown P2P traffic. In the ML-WSVM model, the combination of the wavelet with the support vector machine is implemented by a wavelet basis function that satisfies the wavelet framework and the Mercer theorem, instead of the existing support vector machine kernel functions. The proposed model makes full use of the multi-scale features of the wavelet and the advantages of the support vector machine in classification. Then, an improved sequential minimal optimization (SMO) algorithm based on a loss function is proposed to solve for the optimal hyperplane of the ML-WSVM model. Finally, theoretical analysis and experimental results show that the ML-WSVM model can greatly improve the identification accuracy and efficiency of P2P network traffic, particularly for encrypted packets.
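The abstract does not state which wavelet basis ML-WSVM uses; one widely used Mercer-admissible choice in the wavelet-SVM literature is the translation-invariant kernel built from the mother wavelet h(t) = cos(1.75t)·exp(-t²/2). A minimal sketch under that assumption:

```python
import math

def wavelet_kernel(x, y, a=1.0):
    # Translation-invariant wavelet kernel: the product over dimensions of
    # h((x_i - y_i) / a), with mother wavelet h(t) = cos(1.75 t) * exp(-t^2 / 2).
    # The dilation parameter a plays the role of the usual kernel width.
    k = 1.0
    for xi, yi in zip(x, y):
        t = (xi - yi) / a
        k *= math.cos(1.75 * t) * math.exp(-t * t / 2.0)
    return k

# Identical inputs give the kernel's maximum value of 1.
print(wavelet_kernel([1.0, 2.0], [1.0, 2.0]))  # 1.0
```

In an SVM, this function would simply replace the RBF kernel in the decision function; the multi-scale behavior comes from varying a.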
A Method on Jointing Mobility of BS and Routing for Lifetime Optimization in Wireless Sensor Networks
Qu Jiaqing, Zhang Shu, and Guo Wenzhuo
2011, 48(12):  2261-2267. 
A method is proposed to optimize network lifetime based on the mobility of the base station (BS) and the routing of sensors, in light of the features of wireless sensor networks. Firstly, the best position of the BS, which minimizes the total energy consumption of all the sensors in the network, is derived. Further, the influence that different positions of the BS have upon the total energy consumption of all the sensors is analyzed, and it is proved that the less energy the sensors consume, the longer the network lifetime. In order to reduce the complexity of the BS computation so as to ensure real-time data gathering, the Lagrange-Newton method is adopted to simplify the linear programming. When some sensors can no longer transmit information to the BS because their energy is exhausted, the BS adaptively adjusts its position based on the updated topology in order to decrease the total energy consumption of the sensors. The simplified linear programming is then applied to maximize the minimum lifetime of the sensors in order to enhance the efficiency with which the BS receives information. Theoretical analysis and simulation show that the linear programming simplified by the Lagrange-Newton method not only ensures fast convergence of the computation but also largely decreases its complexity. The proposed BS mobility scheme can, to a large extent, extend the network lifetime, thereby increasing the amount of information collected by the network and enhancing the energy efficiency of the sensor network.
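As a simplified illustration of the best-BS-position idea (assuming transmission energy proportional to squared distance, a common first-order model that the paper's optimization may generalize), the position minimizing total energy under that assumption is simply the centroid of the sensor locations:

```python
def best_bs_position(sensors):
    # If each sensor's energy cost is proportional to the squared distance
    # to the BS, the total-energy minimizer is the centroid of the sensors.
    # (Illustrative assumption; the paper solves a more general program.)
    n = len(sensors)
    return (sum(x for x, _ in sensors) / n, sum(y for _, y in sensors) / n)

print(best_bs_position([(0, 0), (4, 0), (2, 6)]))  # (2.0, 2.0)
```

For other path-loss exponents the minimizer is no longer the centroid, which is where an iterative scheme such as the Lagrange-Newton method becomes useful.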
Performance Bottleneck Analysis and Solution of Shared Memory Operating System on a Multi-Core Platform
Yuan Qingbo, Zhao Jianbo, Chen Mingyu, and Sun Ninghui
2011, 48(12):  2268-2276. 
A shared memory operating system provides many shared data structures protected by different locks. Contention on these locks is related to the number of active processes (system calls, kernel threads and interrupt handlers), which depends on the number of processing elements. As more cores are integrated into a package, more and more processing elements become available to the operating system. Therefore, contention on the locks will decrease the performance of the whole system and may even become a bottleneck. On the other hand, the kernel runs on the same core as the applications. Limited by the capacity of the hardware cache, the kernel code and data of the operating system may replace those of the application in the cache. Once the application gets the opportunity to run, it has to retrieve data from a slower cache level or even from the memory subsystem to continue its execution. In this way, the performance of the application is decreased greatly. This paper demonstrates the existence of the above problems through several experiments on a 16-core AMD platform. We then propose a heterogeneous operating system targeting these problems: the applications and the operating system kernel run on different processing cores in order to decrease both the likelihood of lock contention and the pollution of the cache.
A Runtime Monitoring Web Services Interaction Behaviors Method Based on CPN
Zhu Jun, Guo Changguo, and Wu Quanyuan
2011, 48(12):  2277-2289. 
BPEL (business process execution language) is one of the dominant ways to specify service interactions between different Web services in order to implement much more complex functions. Since it is a description language for Web services composition, BPEL has difficulty in dealing with the behavioral properties of service compositions. Usually, well-defined interaction protocols may be violated by clients and other abnormal partner Web services, leading the service composition processes to inconsistent states and exceptions. As a result, we propose to tackle the conformance problem between the interactions of Web services and their description specification by using a runtime monitor automatically generated from the BPEL description. Firstly, a formal representation model based on colored Petri nets (CPN) is introduced to extract the service interaction behaviors from the description. The pattern mapping rules from the BPEL description to the colored Petri net model, and the related embedding, reduction and composition rules, are also provided. Then a runtime monitor is generated, which captures service interaction behaviors from the service composition processes and detects inappropriate use of the interaction protocol. Several typical service composition samples are adopted as case studies. Finally, full evaluations show that this runtime monitoring mechanism incurs low overhead and has good performance and efficiency.
Static Race Detection of Interrupt-Driven Programs
Huo Wei, Yu Hongtao, Feng Xiaobing, and Zhang Zhaoqing
2011, 48(12):  2290-2299. 
Interrupt-driven programs that run directly on microcontrollers are ubiquitous in safety-critical areas. In these programs, data races, a class of critical errors, may occur. Using static race detection tools is an important way to find such bugs. Unfortunately, the state-of-the-art static race detection tools, which only focus on multithreaded code, may not be helpful. In this paper, a new static race detection tool is designed for such programs. The tool, named Draco, is implemented on top of the Open64 compiler. It provides a simple and easy-to-use language for annotating programs with interrupt-related features, so it can detect races in programs independent of the running platform. Moreover, it embodies a flow-sensitive and context-sensitive race detection algorithm that takes into account the atomicity, flexibility and partial randomness of interrupt-driven programs. By adopting these program analysis techniques, Draco succeeds in detecting the data races of interrupt-driven programs efficiently and precisely. Experimental results show that the detection time of Draco increases asymptotically linearly with the growth of code size, taking only 3.6s to analyze 17850 lines of code. Moreover, its race detection accuracy is on average 2.13 times that of a lockset-based race detection algorithm.
An Assignment Model and Algorithm for Self-Adaptive Software Based on Architecture
Chen Honglong, Li Renfa, Li Rui, and Edwin Sha
2011, 48(12):  2300-2307. 
Research on self-adaptive evolving software is one of the new focuses in the field of software engineering, and online evolution, which makes software adapt to a changing environment according to architecture meta-information, attracts the most attention. Because non-functional requirements have been insufficiently considered in software evolution, we focus on the optimal component assignment problem in self-adaptive software, which must continuously meet the system's constraints based on architecture information at runtime. In this paper, the component-assignment-based architecture model is first described in detail, and a component assignment model is set up for this problem and proved to be NP-hard. We then propose a heuristic algorithm to find a solution to this problem. Finally, the experimental results show that, compared with the greedy and ILP algorithms, the proposed algorithm offers advantages such as multi-objective balance, and compared with ILP alone, it has better time performance. This testifies that the heuristic algorithm proposed in this paper is effective for online evolution decisions.
Research Progresses on Energy-Efficient Software Optimization Techniques
Zhao Xia, Guo Yao, and Chen Xiangqun
2011, 48(12):  2308-2316. 
In the design of battery-driven embedded systems and mobile devices, energy consumption has become one of the most critical constraints. In order to design embedded systems with high performance and low power, it is necessary to consider optimization techniques from both hardware and software perspectives and pursue the optimal tradeoff between performance and energy consumption. This paper presents the concept of software energy consumption and the characteristics of software-based low-energy techniques, and focuses on such techniques during the system development stages. Software-based optimization techniques for reducing system energy consumption in the development stage include instruction-level optimization, algorithm-level optimization, and software architectural-level optimization. The principal issues and recent research progress on software energy consumption optimization are presented. As a key supporting technique for low-energy software development, software energy consumption evaluation techniques are also analyzed in detail, including how to estimate the software energy consumed by the processor and by the whole system. The instruction-level, architectural-level, and macro-based energy consumption models for processor energy estimation are discussed in detail. Finally, several challenges and open issues in software energy consumption research are summarized.
Parallel Computation Techniques for Dynamic Description Logics Reasoning
Wang Zhuxiao, Hu Hong, Chen Limin, and Shi Zhongzhi
2011, 48(12):  2317-2325. 
Scalability is an issue that needs to be considered when designing any reasoner for dealing with large and complex ontologies and large data sets. The practical usage of parallel computation techniques in reasoning is an important premise for the adoption of dynamic description logics (DDL) in real-world settings. In this paper we describe two possible approaches for applying parallel computation techniques to DDL reasoning. The first approach is to design a logical framework of distributed dynamic description logics (D3L), which is composed of a set of stand-alone DDLs pairwise interrelated via collections of bridge rules. We present a tableau-based distributed reasoning procedure that provides the capability of global reasoning in D3L and decomposes large reasoning tasks into sub-tasks that can be processed concurrently by different reasoning agents. The second approach is to parallelize the nondeterministic branches within the DDL tableau procedure. The parallel computation of nondeterministic branches also makes it possible for the reasoning task to be executed simultaneously on several independent machines. Finally, we introduce a prototype inference engine and present its evaluation. The results indicate that the proposed approaches achieve promising performance.
Real AdaBoost Algorithm for Multi-Class and Imbalanced Classification Problems
Fu Zhongliang
2011, 48(12):  2326-2333. 
Current AdaBoost algorithms often do not consider the prior distribution among different classes. To solve this problem, by transforming the expression of the training error from a sign function to an exponential function, a real AdaBoost algorithm for imbalanced classification problems is proposed to minimize the training error rate, and its error estimation is also given. In the same way, the real AdaBoost algorithm for two-class classification can be explained and proved by a new mechanism different from the current explanation of real AdaBoost. It can also be extended to multi-class classification, with an algorithm flow and formulas similar to those of the two-class AdaBoost algorithm, and this extension is proved to be consistent with the Bayes-optimal deduction method. It is proved that with the proposed real AdaBoost algorithm for multi-class classification, the training error rate decreases as the number of trained classifiers increases. Theoretical analysis and experimental results on UCI datasets show the effectiveness of the proposed real AdaBoost algorithm for imbalanced classification. Imbalanced classification problems are often transformed into balanced ones by adjusting the weights of training samples in real AdaBoost, but when the prior distribution is very imbalanced, the proposed algorithm is more effective than the existing real AdaBoost algorithms.
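For orientation, a standard AdaBoost round with the exponential reweighting the abstract alludes to can be sketched as follows; this is the generic two-class update, not the paper's imbalanced-class variant:

```python
import math

def adaboost_round(weights, labels, preds):
    # One boosting round: weighted error -> classifier weight alpha ->
    # exponential reweighting of samples, then renormalization.
    err = sum(w for w, y, p in zip(weights, labels, preds) if y != p)
    err = max(min(err, 1 - 1e-10), 1e-10)  # numerical guard
    alpha = 0.5 * math.log((1 - err) / err)
    new = [w * math.exp(-alpha if y == p else alpha)
           for w, y, p in zip(weights, labels, preds)]
    z = sum(new)
    return alpha, [w / z for w in new]

weights = [0.25] * 4           # uniform initial distribution
labels = [1, 1, -1, -1]
preds = [1, 1, 1, -1]          # weak learner errs on the third sample
alpha, new_w = adaboost_round(weights, labels, preds)
print(round(alpha, 3))  # 0.549
```

The imbalanced variant described in the abstract would, in effect, bias the initial weights by the class priors rather than starting from a uniform distribution.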
An Algorithm for Calculating Minimal Unsatisfiability-Preserving Subsets of Ontology in DL-Lite
Zhou Liping, Huang Houkuan, Qi Guilin, Qu Youli, and Ji Qiu
2011, 48(12):  2334-2342. 
Inconsistencies occur frequently in the ontology lifecycle, for example during ontology construction, evolution and merging. Handling inconsistencies, especially logical inconsistency in ontologies, is increasingly recognized as an important research topic. Finding all the MUPS (minimal unsatisfiability-preserving sub-TBoxes) of an ontology for an unsatisfiable concept or role is an important problem because it provides valuable information for many tasks in inconsistency handling, such as diagnosing, debugging and revising ontologies. Many approaches have been proposed to deal with this issue; however, their main drawback is high computational complexity. One of the main sources of this complexity is the intractability of the underlying description logics (DLs), which may hinder practical application to very large real-life ontologies. In this paper, we focus on DL-Lite, an important tractable description logic which keeps all reasoning tasks tractable and is tailored specifically to deal with large amounts of data. After analyzing some features of unsatisfiable concepts and roles in DL-Lite, we present an algorithm for computing all MUPS of a DL-Lite ontology for an unsatisfiable concept or role. We also compare our algorithm with another representative algorithm. The results indicate that the proposed algorithm is efficient for real-life DL-Lite ontologies.
SHP-VI: A Shortest Hamiltonian Path-Based POMDP Value Iteration Algorithm
Feng Qi, Zhou Xuezhong, Huang Houkuan, and Zhang Xiaoping
2011, 48(12):  2343-2351. 
Trial-based value iteration is a class of efficient algorithms for solving partially observable Markov decision processes (POMDP), among which FSVI is one of the fastest. However, the overhead of computing the MDP value function in FSVI is not negligible for large-scale POMDP problems. In this paper, we propose a new value iteration method based on the shortest Hamiltonian path (shortest Hamiltonian path-based value iteration, SHP-VI). This method explores an optimal belief trajectory using the shortest Hamiltonian path obtained by ant colony optimization, and updates the value function over the encountered belief states in reverse order. The experimental results show that, compared with FSVI, SHP-VI greatly accelerates the computation of belief trajectories in trial-based algorithms.
A Hierarchical Reinforcement Learning Method Based on Heuristic Reward Function
Liu Quan, Yan Qicui, Fu Yuchen, Hu Daojing, and Gong Shengrong
2011, 48(12):  2352-2358. 
Reinforcement learning is about controlling an autonomous agent in an unknown environment, often called the state space. The agent has no prior knowledge about the environment and can only obtain knowledge by acting in it. Reinforcement learning, and Q-learning in particular, encounters a major problem: learning the Q-function in tabular form may be infeasible, because the amount of memory needed to store the table is excessive and the Q-function converges only after each state has been visited many times. Thus the “curse of dimensionality” is inevitably produced by large state spaces, in which the state space grows exponentially with the number of features and convergence slows down. A hierarchical reinforcement learning method based on a heuristic reward function is proposed to alleviate this problem. The method can reduce the state space greatly and speed up learning: actions are chosen with favorable purpose and efficiency so as to optimize the reward function and quicken convergence. The method is applied to the Tetris game. Analysis of the algorithms and the experimental results show that the method can partly overcome the curse of dimensionality and quicken convergence prominently.
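As background for the method, the underlying tabular Q-learning update can be sketched as follows; the ACTIONS set and the idea of feeding a heuristic, feature-based reward through the `reward` argument are illustrative assumptions, not the paper's exact formulation:

```python
ACTIONS = ["left", "right", "rotate", "drop"]  # hypothetical Tetris actions

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    # One tabular Q-learning step:
    #   Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    # In the paper's setting, `reward` would come from a heuristic function
    # over board features rather than the raw game score.
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

q = {}
q_update(q, "s0", "drop", 1.0, "s1")
print(q[("s0", "drop")])  # 0.1
```

The hierarchical part of the method would decompose this single flat table into sub-task tables, which is what shrinks the effective state space.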
Human Pose Tracking Based on Partitioned Sampling Particle Filter and Multiple Cues Fusion
Liu Chenguang, Liu Jiafeng, Huang Jianhua, and Tang Xianglong
2011, 48(12):  2359-2368. 
In this paper, we develop a method for tracking markerless human poses in monocular video sequences. The number of required particles grows exponentially when the particle filter is applied to high-dimensional tracking problems such as tracking human body poses, so a particle filter with partitioned sampling is adopted to deal with this problem. We design a 2D human body model with constraints, and put forward a new adaptive way of fusing color, edge and motion cues together to construct the weighting function of particles. When calculating the likelihood function for each particle, we adaptively choose different templates and features according to the occlusion relationships among correlated body limbs. Thus, the proposed algorithm is capable of dealing with complicated occlusions among body limbs. In addition, we introduce a simplified belief propagation (BP) method to propagate the weights of limb observations to the corresponding particles along the edges of the body model, which makes a set of particles carry multiple constraints. We test the method on three video sequences which contain heavy background occlusion, complex human motion and self-occlusion, and the experimental results show that our method is effective and robust.
A Method of Stream Cube Computing Based on Interesting View Subset
Hou Dongfeng, Zhang Weiming, Liu Qingbao, and Deng Su
2011, 48(12):  2369-2378. 
Stream cube computing is an important foundation of multidimensional analysis over data streams. But the features of data streams (dynamic, infinite, bursty, etc.) and the complexity of multidimensional data structures pose great challenges in storage space, updating efficiency, adaptability, and so on. In many applications, users often focus on only a portion of the views. A computing method based on an interesting view subset is proposed in this paper. The interesting view subset and interesting paths can be obtained from the information in historical queries, and if the efficiency of answering queries decreases, they should be updated with the lapse of time. The Stream-Tree structure is defined for maintaining the cells of the interesting view subset and drilling paths in memory. In the running phase, the cells of the Stream-Tree are continuously updated as new tuples arrive, and old cells are deleted periodically according to the constraints of multi-level time windows. Sparse cells of the Stream-Tree are not divided into finer ones; only the high-level aggregations are preserved. Experiments and analysis indicate that the method is efficient in maintaining the stream cube cells of the current time window in finite memory, and can answer users' queries quickly.
Fast Algorithm of Nearest Neighbor Query for Line Segments of Spatial Database
Liu Runtao and Hao Zhongxiao
2011, 48(12):  2379-2384. 
The definitions of orders between line segments are given according to their MBRs. Based on these orders, an index structure for line segments, the SI-tree, is proposed with the aim of improving the efficiency of nearest neighbor queries. In this structure, the children of each internal node are arranged in an order determined by their geometric locations, so that the positions of data can be located quickly when a nearest neighbor query is executed on the structure; this is a new way to process such problems. Three pruning rules for NN queries are created, which cut away many unnecessary computations and achieve effective screening and filtering of data during query processing, so queries are accelerated. In these pruning rules, the main computation used to decide whether to prune a node is a simple comparison, which is cheaper than the tests used in other algorithms for the same kind of problem. A new nearest neighbor query algorithm for line segment data sets is presented, together with proofs of the algorithm's correctness and termination. Experiments show that, compared with current algorithms for the same problem, the new algorithm greatly improves the efficiency of nearest neighbor query processing.
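The classical flavor of MBR-based pruning can be sketched as follows; this is not the paper's SI-tree or its three rules, just the standard MINDIST lower bound that such pruning rules typically build on (function names are illustrative):

```python
def mindist(q, mbr):
    """Lower bound on the distance from point q to anything inside the MBR.

    mbr is ((xlo, ylo), (xhi, yhi)); the bound is 0 if q lies inside it.
    """
    (xlo, ylo), (xhi, yhi) = mbr
    dx = max(xlo - q[0], 0, q[0] - xhi)
    dy = max(ylo - q[1], 0, q[1] - yhi)
    return (dx * dx + dy * dy) ** 0.5

def prune(candidates, q, best_so_far):
    """Pruning rule: skip any node whose MINDIST already exceeds the
    distance to the best neighbor found so far."""
    return [mbr for mbr in candidates if mindist(q, mbr) <= best_so_far]
```

Because the comparison uses only the MBR corners, a node can be discarded without touching the segments it contains, which is the source of the speedup the abstract describes.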
Fusion of Morphological Features for Mongolian Part of Speech Based on Maximum Entropy Model
Zhang Guanhong, S.Loglo, and Odbal
2011, 48(12):  2385-2390. 
Part-of-speech tagging is one of the fundamental tasks of natural language processing and plays an important role in syntactic analysis, semantic analysis, machine translation, etc. The maximum entropy model is an outstanding statistical model because it integrates various constraints well, and it has been favored in part-of-speech tagging research. An approach to Mongolian part-of-speech tagging that combines linguistic morphological features with a maximum entropy model is proposed in this paper. Although Mongolian has a long history, there has been relatively little research on Mongolian language processing. Mongolian is a typical agglutinative language characterized by rich morphology and a high level of ambiguity. First, based on an analysis of Mongolian scripts, context-feature and word-internal-feature templates are defined and extracted from the training corpus. Then various morphological features of words are integrated into the maximum entropy model, and the IIS algorithm is employed to estimate the model parameters. Experimental results on the closed and open test sets prepared for the Mongolian POS tagging task show that the maximum entropy model with integrated morphological features outperforms the HMM model and is well suited to Mongolian scripts.
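A feature template mixing context features with word-internal (suffix) features, as the abstract describes, might look like the following sketch; the template layout and suffix lengths are assumptions, since the paper's actual templates for Mongolian script are not given here:

```python
def features(words, i, suffix_lens=(1, 2, 3)):
    """Context + word-internal feature template for token i.

    Character suffixes stand in for the agglutinative morphology of
    Mongolian; each feature becomes one constraint in the maxent model.
    """
    w = words[i]
    feats = {
        "w": w,
        "prev": words[i - 1] if i > 0 else "<s>",
        "next": words[i + 1] if i < len(words) - 1 else "</s>",
    }
    for n in suffix_lens:
        if len(w) >= n:
            feats[f"suf{n}"] = w[-n:]
    return feats
```

In training, each extracted feature/tag pair defines a binary feature function whose weight IIS (improved iterative scaling) then estimates.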
Test Data Compression and Decompression Using Symmetry-Variable Codes
Liang Huaguo, Jiang Cuiyun, and Luo Qiang
2011, 48(12):  2391-2399. 
With the rapid development of very large scale integration (VLSI) manufacturing technologies, more and more transistors can be packed into a single chip. Because various intellectual-property (IP) cores are integrated into a chip, VLSI testing faces enormous challenges. Test data compression techniques based on compression codes are used to reduce ATE memory and test application time and to tackle the ATE bandwidth limitation. A new test data compression technique based on a symmetry-variable code (SVC) is presented in this paper, which provides a significant reduction in test data volume and reduces test cost. Traditional variable-to-variable run-length coding techniques are based on encoding runs of 0's and runs of 1's. However, the original test set generated by an ATPG tool contains numerous don't-care bits, so previous methods did not exploit the characteristics of the test set effectively. Because the proposed method encodes four types of runs arranged symmetrically, it can shorten the code words for each run type and increase the compression ratio. Experimental results on the ISCAS89 benchmarks and theoretical analysis show that the SVC code provides higher test data compression efficiency than previous codes and adapts better to various test sets. The decoder for SVC is simple and easy to implement.
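The general mechanism behind run-length test compression (not the SVC scheme itself) can be sketched in two steps: fill don't-care bits so they extend neighboring runs, then encode the run structure. The fill policy and helper names below are illustrative assumptions:

```python
def fill_dont_cares(bits):
    """Map each don't-care 'x' to the value of the preceding specified bit
    (0 for a leading 'x'), so runs grow longer and compress better."""
    out, last = [], "0"
    for b in bits:
        last = b if b in "01" else last
        out.append(last)
    return "".join(out)

def run_lengths(bits):
    """Split a fully specified bit string into (symbol, length) runs,
    the units that a run-length code would then assign code words to."""
    runs, cur, n = [], bits[0], 1
    for b in bits[1:]:
        if b == cur:
            n += 1
        else:
            runs.append((cur, n))
            cur, n = b, 1
    runs.append((cur, n))
    return runs
```

An SVC-style scheme would go further, distinguishing four symmetric run types and giving each its own (shorter) code-word table.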
Adaptive Buffer Management for Leakage Power Optimization in NoC Routers
Qi Shubo, Li Jinwen, Yue Daheng, Zhao Tianlei, and Zhang Minxuan
2011, 48(12):  2400-2409. 
Network on chip (NoC) is becoming a promising design solution for the interconnection between processor cores and cache banks in chip multiprocessors (CMPs) as the number of cores on a chip increases. The interconnection network is the main power-consuming component in a CMP. The input buffer is the largest leakage power consumer in a DVOQR (dynamic virtual output queues router), accounting for about 64.9% of the total router leakage power. Run-time power gating is one of the attractive methods for reducing the leakage power of routers. In the adaptive buffer management strategy proposed in this paper, a fraction of the input buffer entries can be turned on or off according to the network traffic, reducing the leakage power of the buffers. The wakeup latency increases the average network latency, but the look-ahead wakeup technique can hide it and reduce this negative effect. Under a low offered traffic rate, the average network latency is not affected by the wakeup latency in the two-entry-buffer-never-turned-off strategy. Simulation results show that the reduction of leakage power consumed by buffers reaches 46% when the offered traffic rate is 0.7, and that the increase of average network latency is at most 3.8% in the two-entry-buffer-never-turned-off strategy when the offered traffic rate is less than 0.4.
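The control policy described, keep a small floor of entries always on and wake additional entries slightly ahead of demand, can be sketched as follows; the class, the one-entry look-ahead margin, and the occupancy-driven target are illustrative assumptions, not the paper's exact controller:

```python
class BufferPowerGate:
    """Toggle power to input-buffer entries based on occupancy, keeping a
    floor of always-on entries (cf. the two-entry-never-turned-off strategy)
    so wakeup latency is hidden at low load."""

    def __init__(self, depth, always_on=2):
        self.depth = depth          # total buffer entries in the router port
        self.always_on = always_on  # entries that are never power-gated
        self.powered = always_on

    def update(self, occupancy):
        # Power one entry ahead of current demand (look-ahead wakeup),
        # clamped between the always-on floor and the physical depth.
        self.powered = max(self.always_on, min(self.depth, occupancy + 1))
        return self.powered
```

Entries above the target are gated off to save leakage power; the one-entry margin is what hides the wakeup latency from arriving flits.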