ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 November 2013, Volume 50 Issue 11
Regularization Path Algorithm of SVM via Positive Definite Matrix
Shizhong, Wang Mei, and Zhao Zhihui
2013, 50(11):  2253-2261. 
The regularization path algorithm is an efficient method for numerically solving the support vector machine (SVM) classification problem: it fits the entire path of SVM solutions for every value of the regularization parameter, at essentially the same computational cost as fitting a single SVM model. Existing SVM regularization path algorithms neither handle datasets containing duplicate, nearly duplicate, or linearly dependent points efficiently, nor admit an efficient numerical solution. To address these issues, an improved regularization path algorithm based on a positive definite matrix, positive definite SVM path (PDSVMP), is proposed in this paper, which traces the exact path of SVM solutions. The coefficient matrix of the system of iteration equations is transformed into a positive definite matrix, the Lagrange multiplier increment vector is then computed by Cholesky decomposition, and the increment of the regularization parameter is derived from the changes of the active set and used to compute the regularization parameter at each inflection point. This treatment guarantees theoretical correctness and numerical stability while reducing computational complexity. Experimental results on an instance dataset and benchmark datasets show that PDSVMP effectively and efficiently handles datasets containing duplicate, nearly duplicate, or linearly dependent points.
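As a minimal sketch of the key numerical step, assuming a symmetric positive definite coefficient matrix M and right-hand side b (invented toy data, not the authors' formulation), the multiplier-increment solve by Cholesky decomposition could look like this:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def multiplier_increment(M, b):
    """Solve M @ delta = b for the Lagrange multiplier increments.

    M : (n, n) symmetric positive definite coefficient matrix
    b : (n,) right-hand side derived from the active set
    """
    c, low = cho_factor(M)         # M = L @ L.T once per inflection point
    return cho_solve((c, low), b)  # two cheap triangular solves

# Toy 3x3 SPD system standing in for the iteration equations.
M = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
delta = multiplier_increment(M, b)
assert np.allclose(M @ delta, b)
```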
Paper
A Co-Training Method Based on Teaching-Learning Model
Hu Juhua, Jiang Yuan, and Zhou Zhihua
2013, 50(11):  2262-2268. 
In many real tasks there are abundant unlabeled data but only a few labeled data, and therefore semi-supervised learning has attracted significant attention in the past few years. Disagreement-based approaches are a state-of-the-art paradigm of semi-supervised learning, in which multiple classifiers are generated to label unlabeled instances for each other. Co-training is the first and seminal work in this category. However, during the labeling process, most current co-training style approaches consider only the confidence of the predictor, not its helpfulness to the learner. In this paper, inspired by real-world teaching-learning systems, we propose a teaching-learning model named "TaLe" for co-training, in which the predictor is regarded as a teacher who is teaching while the other classifier is a student who is learning. Based on this model, a new variant of co-training named CoSnT is presented that considers both the confidence of the teacher and the need of the student; intuitively, this improves the convergence efficiency of co-training. Experiments on both multi-view and single-view data sets validate the efficiency of CoSnT and show that it can outperform both the standard co-training algorithm CoTrain, which considers only the teacher's confidence, and the CoS algorithm, which considers only the student's need.
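The teaching-learning idea can be conveyed with a hedged sketch of one labeling round: the teacher's confidence and the student's need are both scored, and only examples scoring high on both are handed over. The scoring rule and the GaussianNB views below are illustrative assumptions, not the published CoSnT procedure:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def cosnt_round(teacher, student, U_teacher, U_student, n_pick=5):
    """One labeling round: indices and pseudo-labels the teacher hands over."""
    pt = teacher.predict_proba(U_teacher)
    ps = student.predict_proba(U_student)
    confidence = pt.max(axis=1)       # how sure the teacher is
    need = 1.0 - ps.max(axis=1)       # how unsure the student is
    score = confidence * need         # teach what is both sure and needed
    picked = np.argsort(score)[-n_pick:]
    return picked, pt[picked].argmax(axis=1)

# Two-view toy data: columns 0-1 form view 1, columns 2-3 form view 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 4))
y = (X[:, 0] + X[:, 2] > 0).astype(int)
teacher = GaussianNB().fit(X[:, :2], y)
student = GaussianNB().fit(X[:, 2:], y)
U = rng.normal(size=(20, 4))
idx, pseudo = cosnt_round(teacher, student, U[:, :2], U[:, 2:])
```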
An Adaptive Large Margin Nearest Neighbor Classification Algorithm
Yang Liu, Yu Jian, and Jing Liping
2013, 50(11):  2269-2277. 
Although kNN has been successfully applied to pattern recognition in various areas, there remains a large gap to good performance because of the parameter k. Existing kNN-type methods fix k for all testing examples, which is inappropriate because data density varies with the application. To deal with this drawback, an adaptive large margin nearest neighbor classification algorithm (ALMNN) is proposed in this paper, which avoids predefining a single value of k for all data points. The new method adaptively selects an optimal k for each testing example by solving an optimization problem, and then assigns a label to the testing point based on the loss function. A series of experiments on real-world data sets (including UCI benchmark data sets, image data sets, and text data sets) shows that the new algorithm outperforms existing methods. Meanwhile, ALMNN makes kNN insensitive to the choice of k and to the random selection of training sets.
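A hedged sketch of the adaptive-k idea, selecting k per test point by a simple disagreement loss — the actual ALMNN optimization problem is more involved, and the candidate set and loss here are illustrative:

```python
import numpy as np
from collections import Counter

def adaptive_knn_predict(X_train, y_train, x, k_candidates=(1, 3, 5, 7, 9)):
    """Pick, per test point, the k whose neighbourhood disagrees least."""
    order = np.argsort(np.linalg.norm(X_train - x, axis=1))
    best = (np.inf, None, None)               # (loss, label, k)
    for k in k_candidates:
        label, count = Counter(y_train[order[:k]]).most_common(1)[0]
        loss = 1.0 - count / k                # disagreement among k neighbours
        if loss < best[0]:
            best = (loss, label, k)
    return best[1], best[2]

X = np.array([[0.0], [0.1], [0.2], [1.0], [1.1]])
y = np.array([0, 0, 0, 1, 1])
print(adaptive_knn_predict(X, y, np.array([0.15])))  # label 0, chosen k = 1
```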
An Intelligent Optimization Algorithm Based on Hybrid of GA and PSO
Ma Chao, Deng Chao, Xiong Yao, and Wu Jun
2013, 50(11):  2278-2286. 
Particle swarm optimization (PSO) is simple in theory and quick in convergence, but likely to become premature at an early stage. Genetic algorithms (GA) have strong global search ability, but their convergence accuracy is low. Weighing these advantages and disadvantages, this paper analyzes the structure and critical parameters of both methods and applies genetic operators and crossover-based search to the PSO algorithm to avoid falling into locally optimal solutions. In this process, the inertia weight and mutation methods are improved to balance global and local search ability, and part of the swarm is mutated once the population has converged into a sufficiently small region. The resulting algorithm achieves higher convergence accuracy and is well suited to nonlinear, multi-extremum problems in engineering applications. Comparisons with other algorithms on a variety of test functions show that the hybrid PSO-GA algorithm has clear advantages in solution accuracy, search efficiency, and the ability to handle different functions, and meets engineering needs.
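A minimal numpy sketch of such a hybrid, assuming standard PSO updates plus Gaussian mutation of part of the swarm once it has contracted into a small region; all parameter choices below are illustrative placeholders:

```python
import numpy as np

def hybrid_pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, eps=1e-3):
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    for _ in range(iters):
        g = pbest[pval.argmin()]                       # global best
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        if x.std(axis=0).max() < eps:                  # swarm collapsed:
            idx = rng.choice(n, n // 4, replace=False) # mutate a quarter
            x[idx] += rng.normal(0.0, 1.0, (len(idx), dim))
    return pbest[pval.argmin()], pval.min()

best_x, best_f = hybrid_pso(lambda z: (z ** 2).sum())  # sphere test function
```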
Incremental Learning Towards Scene Features in Independent Subspace
Xie Zhao, Ling Ran, and Wu Kewei
2013, 50(11):  2287-2294. 
Scene classification is not an easy task owing to the variability and ambiguity of scenes and the wide range of illumination and scale conditions they may exhibit. Since feature extraction and scene representation play important roles in classification tasks, this paper presents an approach for unsupervised feature learning based on independent subspace analysis. The proposed method automatically learns structural feature bases, organized in a grouped fashion within independent subspaces, from randomly sampled natural image patches. The optimization of the feature bases is implemented under an incremental learning framework to cope with large or dynamic sample sets. Patch-based image descriptors are computed over regularly divided grids using nonlinear combinations of the learned feature bases. These descriptors are then fed into the spatial pyramid matching model, which incorporates spatial layout information and global geometric correspondence, to build hierarchical scene representations for recognizing scene categories. Experiments reveal how the related parameters influence the optimization process and the final classification performance. Compared with several typical models on the OT scene dataset, the proposed method forms low-dimensional yet efficient image patch descriptors and achieves high, stable classification accuracy.
A Soft-Thresholding Coordinate Descent Algorithm for Solving Truncated Hinge Loss
Zhu Yelei, Wang Yujun, Luo Qiang, and Tao Qing
2013, 50(11):  2295-2303. 
Substantially reducing the number of support vectors (SV) can improve robustness to outliers, deliver more accurate classifiers, and reduce the training and prediction time of SVM. Among sparse algorithms, the truncated Hinge loss method reduces the number of support vectors effectively, but the resulting optimization problem is non-convex and cannot be solved by conventional convex optimization methods. Some researchers therefore convert the non-convex problem into a series of convex problems with the concave-convex procedure (CCCP), which not only adds computation but may also fall into a locally optimal solution. To circumvent these drawbacks, we propose a CCCP-based soft-thresholding (ST) coordinate descent (CD) algorithm, STCD-CCCP. We solve each convex sub-problem of CCCP with coordinate descent to improve computational efficiency, and apply a soft-thresholding projection in the dual SVM problem, which further reduces the number of support vectors by setting classifier coefficients to zero. Meanwhile, with an appropriate regularization parameter, the soft-thresholding projection mitigates the adverse effect of local optima and improves classification accuracy. Extensive experiments on simulated data, UCI databases, and a large-scale real database verify that the proposed algorithm is correct and effective.
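The soft-thresholding operator at the heart of the coordinate descent step fits in a few lines; this is the generic operator S(z, λ) = sign(z)·max(|z| − λ, 0), not the full STCD-CCCP solver:

```python
import numpy as np

def soft_threshold(z, lam):
    """S(z, lam) = sign(z) * max(|z| - lam, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Coefficients whose magnitude falls below lam are clipped to exact zero,
# which is what prunes the corresponding support vectors.
print(soft_threshold(np.array([-2.0, -0.3, 0.1, 1.5]), 0.5))
# [-1.5 -0.   0.   1. ]
```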
A Traversability Detection Method Dealing with Shaded Terrains
Gao Hua, Zhao Chunxia, and Zhang Haofeng
2013, 50(11):  2304-2314. 
Traversability detection is one of the basic methods of vision navigation for autonomous robots. Shaded terrain is ever-present on real roads and in urban environments, yet research addressing shaded terrain remains very limited. A traversability detection method dealing with shaded terrain is proposed. Firstly, using the SLIC superpixel algorithm, the scene image is divided into irregular patches (superpixels) whose pixels are homogeneous and at the same illumination level. Secondly, a novel positioning method decides the location of the feature extraction window so that it contains as many homogeneous, equally illuminated pixels as possible. The local binary pattern (LBP) descriptor is then extracted to describe each patch. Finally, a one-class support vector machine (One-Class SVM) is used for training and classification. Results show that the proposed method performs well not only in areas whose neighboring pixels are at nearly the same illumination level, but also at the boundaries between shaded and unshaded areas. The algorithm even achieves good results in regions of mottled shade, and eliminates the disconnectedness caused by shade boundaries.
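A hedged sketch of how such a pipeline might be wired up with common library routines — SLIC superpixels, uniform LBP histograms per superpixel, and a One-Class SVM; the window positioning step is omitted and all parameters and the toy "ground" set are simplified assumptions, not the authors' implementation:

```python
import numpy as np
from skimage import data
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern
from skimage.segmentation import slic
from sklearn.svm import OneClassSVM

def superpixel_lbp_features(image, n_segments=200, P=8, R=1.0):
    """Uniform-LBP histogram for every SLIC superpixel of the image."""
    gray = rgb2gray(image)
    labels = slic(image, n_segments=n_segments, compactness=10)
    lbp = local_binary_pattern(gray, P, R, method="uniform")  # values 0..P+1
    feats = [np.histogram(lbp[labels == sp], bins=P + 2,
                          range=(0, P + 2), density=True)[0]
             for sp in np.unique(labels)]
    return np.array(feats), labels

feats, labels = superpixel_lbp_features(data.astronaut())
# Train on superpixels assumed traversable, then flag outliers elsewhere:
clf = OneClassSVM(nu=0.1, gamma="scale").fit(feats[:50])   # toy "ground" set
pred = clf.predict(feats)          # +1 traversable-like, -1 outlier/shaded
```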
A Support Vector Machine Learning Method Based on Granule Shift Parameter
Guo Husheng and Wang Wenjian
2013, 50(11):  2315-2324. 
In practical applications, data size and distribution density are often imbalanced. Because the probabilities of samples falling into different regions differ under such size or density imbalance, the hyperplane obtained by a traditional support vector machine (SVM) based on the maximum margin may not be optimal. Combining granular computing (GrC) theory, an improved granular support vector machine (GSVM) model based on a granule shift parameter, S_GSVM, is presented to solve imbalanced data classification problems. In the S_GSVM model, the original data are first mapped into a high-dimensional feature space by a Mercer kernel, and the mapped data are granulated in that space. Two granule factors, support and disperse, are defined to measure the influence of the sample distribution on SVM performance, and the shift parameter of each granule is computed from them. Based on these shift parameters, a new convex quadratic optimization problem is constructed and solved. By fully considering the influence of the data distribution on generalization performance, the proposed S_GSVM model improves on the maximum-margin hyperplane. Experimental results on benchmark datasets and the Database of Interacting Proteins demonstrate the effectiveness and efficiency of the proposed S_GSVM model.
Heuristic Optimization Algorithms of Multi-Carpooling Problem Based on Two-Stage Clustering
Shao Zengzhen, Wang Hongguo, Liu Hong, Song Chaochao, Meng Chunhua, and Yu Hongling
2013, 50(11):  2325-2335. 
In logistics engineering, transportation systems, and other domains, carpooling can have an enormous effect: a good carpooling strategy not only economizes logistics costs and reduces traffic congestion, but is also especially favorable for reducing noise and improving the environment. A two-stage clustering heuristic strategy is therefore introduced to solve the deterministic multi-carpooling problem. In the first stage, the concept of matching degree is proposed to guide the assignment of service requests to specific vehicles, so that the original problem can be split into several single-vehicle problems. In the second stage, based on a prior clustering idea, the number of insertion operations in each single-vehicle matching process is greatly reduced, improving the efficiency of the algorithm. To improve the matching success rate and reduce total costs, the assignment of vehicles is not fixed but adjustable: heuristic emigration and immigration operators produce a new distribution when some requests fail to be assigned or would lead to long arcs in the previous distribution. To verify the validity and practicality of the method, test instances are generated from real map information. Experimental results show that the algorithm greatly improves the ride success rate, reduces total vehicle costs, and demonstrates strong practicality.
Detecting Spammers with a Bidirectional Vote Algorithm Based on Statistical Features in Microblogs
Ding Zhaoyun, Zhou Bin, Jia Yan, and Wang Xiang
2013, 50(11):  2336-2348. 
Existing work mainly detects spammers in microblogs with explicit features, such as the interval between tweets, the ratio of mentions in tweets, and the ratio of URLs in tweets. In this paper, the DirTriangleC algorithm, which counts local triangles in the directed following network, is developed to detect implicit spammers. Moreover, the AttriBiVote algorithm is proposed, which classifies users through bidirectional propagation of trust combined with the statistical features of neighboring users. Experiments are conducted on a real Twitter dataset containing about 0.26 million users and 10 million tweets, and the results show that our method is more effective than other statistical-feature methods. About 83.7% of dead accounts are discovered by the DirTriangleC algorithm, and the number of potential spammers it finds is about three times that found by explicit-feature methods. Moreover, the number of spammers found by the AttriBiVote algorithm exceeds the number of approximate spammers identified by statistical features, and the precision of our method is higher than that of methods based on the tweet interval, the mention ratio, or the URL ratio. Finally, the time cost of our method is analyzed.
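The local-triangle intuition can be illustrated with a toy sketch: an account that follows many users whose neighborhoods never close into triangles is a spam candidate. The graph, the degree threshold, and the undirected triangle count below are invented placeholders, not the DirTriangleC algorithm itself:

```python
import networkx as nx

def local_triangles(g, node):
    """Triangles through `node`, ignoring edge direction."""
    und = g.to_undirected(as_view=True)
    nbrs = set(und[node])
    return sum(1 for u in nbrs for v in und[u]
               if v in nbrs and v != node) // 2

g = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "a"),   # a genuine community
                ("spam", "a"), ("spam", "d")])        # follows, never closed
suspects = [n for n in g
            if g.out_degree(n) > 1 and local_triangles(g, n) == 0]
print(suspects)  # ['spam']
```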
An Active Collective Classification Method Combing Feature Selection and Link Filtering
Li Lina, Ouyang Jihong, Liu Dayou, and Gao Wenjie
2013, 50(11):  2349-2357. 
With the rapid development of information technology, represented by the Internet, network data exist widely in the real world, and classification in network data has become an important research topic in network data mining. Collective classification exploits the dependency relationships between nodes to classify related nodes simultaneously and obtains higher classification accuracy. It has attracted wide attention and has been applied in a variety of domains, such as hyperlinked document classification, protein interaction and gene expression data classification, and social network analysis. We present an active collective classification method that combines feature selection and link filtering. The algorithm first chooses important attributes with the minimum redundancy-maximum relevance feature selection method and constructs implicit links, then filters the original links to obtain explicit links, and finally integrates explicit and implicit links to perform classification. We compare our method with several typical traditional classification methods and several typical collective classification methods; the results show that our method obtains higher accuracy, and its advantage is especially obvious on sparsely labeled networks.
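A hedged sketch of the first stage only: greedy minimum redundancy-maximum relevance selection built from sklearn's mutual information estimator, using absolute correlation as a redundancy proxy (an assumption; the paper's exact criterion and the link construction and filtering stages are not shown):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mrmr(X, y, k):
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            if relevance[j] - redundancy > best_score:
                best, best_score = j, relevance[j] - redundancy
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + X[:, 3] > 0).astype(int)
print(mrmr(X, y, 3))   # informative columns 0 and 3 should surface
```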
Analysis and Design of Distance-Bounding Protocols for RFID
Xin Wei, Sun Huiping, and Chen Zhong
2013, 50(11):  2358-2366. 
Relay attacks pose a serious threat to the security of radio frequency identification (RFID) systems. The adversary manipulates the communication merely by relaying the verbatim messages between a reader and a tag in order to increase the communication distance between them, breaking the implicit assumption that a tag is within a very short distance of a reader. The main countermeasure against relay attacks is the use of distance-bounding protocols, which measure the round-trip time between the reader and the tag. In 2005, Hancke and Kuhn proposed the first distance-bounding protocol dedicated to RFID systems, named HK, and a number of related schemes have been proposed in the literature since. In this paper, we design a distance-bounding protocol against relay attacks suited to computation-limited RFID tags. We first review the existing RFID distance-bounding protocols and examine their weaknesses and advantages. We then propose an attack model for designing and analyzing RFID distance-bounding protocols. Finally, we propose a novel distance-bounding protocol based on the HK protocol, named HKM, which mixes predefined and random challenges and exploits the memory wasted by the HK protocol. Compared with existing distance-bounding protocols, HKM performs well in both memory consumption and resistance to relay attacks.
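A toy simulation of an HK-style rapid bit exchange conveys the principle: both sides derive two response registers from a shared key and fresh nonces, and in each round the reader sends a random challenge bit and times the one-bit response. HKM's mixed predefined/random challenges and its memory reuse are not modeled here; register sizes and round counts are arbitrary:

```python
import hashlib, hmac, secrets, time

def derive_registers(key, na, nb, n=32):
    """Split a keyed hash of both nonces into response registers R0, R1."""
    h = hmac.new(key, na + nb, hashlib.sha256).digest()
    bits = [(h[i // 8] >> (i % 8)) & 1 for i in range(2 * n)]
    return bits[:n], bits[n:]

key = secrets.token_bytes(16)
na, nb = secrets.token_bytes(8), secrets.token_bytes(8)   # fresh nonces
r0, r1 = derive_registers(key, na, nb)                    # tag's registers
v0, v1 = derive_registers(key, na, nb)                    # reader's copies

ok, worst_rtt = True, 0.0
for i in range(32):                       # rapid bit exchange
    c = secrets.randbits(1)               # reader's random challenge bit
    t0 = time.perf_counter()
    resp = (r1 if c else r0)[i]           # tag answers from one register
    worst_rtt = max(worst_rtt, time.perf_counter() - t0)
    ok &= resp == (v1 if c else v0)[i]
# A real reader would also require worst_rtt below a distance bound.
print("authenticated:", ok)
```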
Study on Detection of Covert Channel in Flume System
Cao Hui, Xiong Shengchao, Zhang Huanguo, and Yan Fei
2013, 50(11):  2367-2374. 
The Flume system not only provides security protection for information transmitted between processes at different security levels, but also uses an explicit label mechanism to solve the covert-channel problem caused by timeouts during inter-process communication, a problem that other DIFC-based security systems with implicit label mechanisms cannot solve. However, Flume's label allocation mechanism may itself leak information through a special covert channel when processes communicate. In this paper, a covert channel detection model (CCDM) is introduced by analyzing the cause of this information leakage in Flume. CCDM abstracts covert-channel search as a connectivity problem in a directed graph, and two algorithms that automatically search for covert channels in Flume are presented based on CCDM and the idea of backtracking. Experimental results show that CCDM and the proposed algorithms not only detect covert channels in Flume effectively, but also provide the shortest path along which processes can transmit information, offering guidance for improving system security.
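The reduction to directed-graph search can be illustrated with a plain backtracking DFS that enumerates simple paths from a secret source to a public sink; the flow graph below is invented for illustration and is not CCDM's actual node/edge encoding:

```python
def find_paths(flows, src, dst, path=None):
    """Backtracking DFS over permitted flows; yields every simple path."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in flows.get(src, []):
        if nxt not in path:              # backtrack instead of looping
            yield from find_paths(flows, nxt, dst, path)

flows = {"secret": ["helper"],
         "helper": ["spool", "log"],
         "spool":  ["public"]}
for p in find_paths(flows, "secret", "public"):
    print(" -> ".join(p))                # secret -> helper -> spool -> public
```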
A Method of Intrusion Detection Based on Semi-Supervised GHSOM
Yang Shilai, Yang Yahui, Shen Qingni, and Huang Haizhen
2013, 50(11):  2375-2382. 
Network intrusion detection based on artificial neural networks is an important research direction in the intrusion detection area. This paper proposes a semi-supervised GHSOM (growing hierarchical self-organizing maps) neural network algorithm, in which a small amount of labeled data guides the clustering of a large amount of unlabeled data. On the one hand, the idea of the semi-supervised cop-kmeans algorithm is introduced into the unsupervised GHSOM algorithm, and the problem of returning no result is solved in the semi-supervised setting. On the other hand, the concept of neural entropy is proposed and used as the growth criterion of the neural network, improving the precision with which the network is divided into subnets. Besides, the labeled data are also used to determine the intrusion type of neurons automatically. Network intrusion detection experiments on both the KDD Cup 1999 data set and a data set collected on a LAN show that the total detection rate of an intrusion detection system employing the semi-supervised GHSOM algorithm is higher than that of a system employing the unsupervised GHSOM algorithm.
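The constraint logic borrowed from cop-kmeans can be shown in isolation: an instance may join a neuron's cluster only if no must-link or cannot-link pair is violated. This is the generic check, not the full semi-supervised GHSOM training loop:

```python
def violates(idx, cluster, assign, must_link, cannot_link):
    """True if putting instance `idx` into `cluster` breaks a constraint."""
    for a, b in must_link:
        other = b if a == idx else a if b == idx else None
        if other is not None and assign.get(other, cluster) != cluster:
            return True
    for a, b in cannot_link:
        other = b if a == idx else a if b == idx else None
        if other is not None and assign.get(other) == cluster:
            return True
    return False

assign = {1: 3, 2: 5}                       # instance -> neuron/cluster
print(violates(0, 3, assign, [(0, 1)], [(0, 2)]))  # False: all satisfied
print(violates(0, 5, assign, [(0, 1)], [(0, 2)]))  # True: both broken
```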
A Parallel Architecture for FPGA Based Hyperelliptic Curve Cryptoprocessor
Fang Yuejian, Shen Qingni, and Wu Zhonghai
2013, 50(11):  2383-2388. 
Hyperelliptic curve cryptography is an extension of elliptic curve cryptography: hyperelliptic curve cryptosystems (HECC) achieve the same level of security as RSA and elliptic curve cryptosystems (ECC) with shorter key lengths. A parallel architecture for a field programmable gate array (FPGA) based hyperelliptic curve cryptoprocessor is designed in this paper. The processor is composed of parallel finite field (FF) cores, each consisting of a control unit, a register file, and an ALU. Through a register-file sharing mechanism, the independent cores can collaborate on complicated computations. Each ALU executes the customized instruction A(B+C)+D, which can be flexibly configured during instruction generation and execution. In our architecture, every ALU is coupled with its own control unit and ROM, so the ALUs are independent of each other: any ALU can be started at any cycle, multiple instructions can run on the ALUs simultaneously, and ALU results can be shared among the register files so that multiple ALUs cooperate on complicated computations. A four-stage pipeline is used to increase performance. The architecture fully supports parallel processing, and the experimental results show a substantial speedup.
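A software model of the customized ALU instruction A(B+C)+D over GF(2^m) illustrates the datapath: addition is XOR, and multiplication is carry-less with polynomial reduction. GF(2^8) with the AES polynomial is used here purely as a familiar stand-in, not the field of the paper's cryptoprocessor:

```python
def gf_mul(a, b, poly=0x11B, m=8):
    """Carry-less multiply in GF(2^m) with polynomial reduction."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> m:
            a ^= poly
        b >>= 1
    return r

def alu_op(a, b, c, d, poly=0x11B, m=8):
    """The customized instruction: A*(B+C)+D, '+' being XOR in GF(2^m)."""
    return gf_mul(a, b ^ c, poly, m) ^ d

print(hex(alu_op(0x57, 0x83, 0x00, 0x01)))   # 0xc0
```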
A High-Capability Scattered IP Watermarking Algorithm in FPGA Design
Xu Jianbo, Long Jing, and Peng Li
2013, 50(11):  2389-2396. 
To increase watermark capacity and watermark security, this paper analyzes existing FPGA (field programmable gate array) based IP (intellectual property) watermarking techniques and presents a high-capability scattered IP watermarking algorithm for FPGA designs. First of all, the algorithm adopts a watermark preprocessing mechanism for data compression, which uses a compression function to construct specific linear combination expressions from the encrypted information; the mechanism introduces no additional hardware overhead into the original design. Secondly, during embedding, the linear combination expressions are transformed into watermark positions and bit information, and the bits are scattered throughout the FPGA circuit by additional constraint methods, making it very difficult for an illegal user to locate the embedded watermark. Finally, experimental tests and performance analysis show that the algorithm marks more watermark information with fewer embedded bits and greatly reduces the impact of the embedded watermark on system performance. In addition, experimental comparison with existing FPGA IP watermarking algorithms shows that, relative to other methods, the proposed algorithm has a larger watermark capacity, a smaller resource overhead, and better security.
Dynamic Trustworthiness Evaluation Model of Software Based on Checkpoint's Classification Attributes
Li Zhen, Tian Junfeng, and Yang Xiaohui
2013, 50(11):  2397-2405. 
At present, trusted computing can only ensure static security; once the system is running, the trustworthiness of a terminal computer mainly depends on the trustworthiness of the software behavior running on it. Therefore, the dynamic trustworthiness of running software has become a critical problem for the trustworthiness of computer systems. To address it, checkpoint scene-level attributes based on interval data are introduced, and a dynamic trustworthiness evaluation model of software based on checkpoints' classification attributes is presented, building on the traditional software behavior model. The model evaluates the trustworthiness of software behavior against the trace of expected behavior. It uses a checkpoint-attribute classification strategy that simplifies threshold-based trustworthiness evaluation at checkpoints, and combines the subjective classification weighting of checkpoint attributes with the objective weighting of scene-level attributes. The trustworthiness of scene-level attributes based on interval data is evaluated by constructing a trusted model of those attributes. Theoretical analysis shows that classification-attribute decision making reduces the number of comparisons with the threshold, and the experimental results verify the effectiveness of the interval-data evaluation of scene-level attributes and the high attack detection capability of the model.
An Approach to Runtime Memory Leak Detection Oriented to Xen Virtual Computing Environment
Xiao Ruliang, Jiang Jun, Hu Yao, Han Jia, Ni Youcong, Du Xin, and Cai Shengzhen
2013, 50(11):  2406-2417. 
Research on the system performance and stability of virtual computing environments is of important theoretical and practical significance for related technologies and applications. Memory leaks, which occur in long-running systems, may have serious consequences in practice, and detecting them at runtime in a virtual computing environment is a challenging problem. To solve this problem, the phenomenon of memory leaks is first classified. Based on an analysis of memory management in the Xen hypervisor, a runtime memory leak detection approach oriented to the Xen virtual computing environment, named VMLD (virtualization memory leak detection), is proposed; its overall framework is designed and implemented, and corresponding detection rules are put forward. VMLD realizes several modules, including a maintainer (a Xen internal buffer), a controller, an interceptor, and a monitor, using six defined hypercalls and modifications to the virtual machine monitor. The experimental results show that VMLD can detect runtime memory leaks effectively with good performance.
A Reconstruction Method of Type Abstraction in Binary Code
Ma Jinxin, Li Zhoujun, Hu Chaojian, Zhang Junxian, and Guo Tao
2013, 50(11):  2418-2428. 
Reconstructing type information from binary code plays an important role in reverse engineering, malicious code detection, and vulnerability analysis. Type reconstruction is considered one of the most difficult problems in this area because type information is eliminated during compilation and the low-level abstraction of binary code is hard to understand; most current tools cannot yet reconstruct types precisely enough. In this paper, we present a conservative method of type reconstruction and introduce a simple intermediate language. Based on this intermediate language, register abstract syntax trees are constructed and used to resolve the ambiguity of base-address pointers, which effectively collects basic-type and structure-type constraint information. We also present a method for identifying loop structures and loop count variables in binary code, which effectively collects array-type constraint information. Type constraints are generated from the collected information and solved to reconstruct the final types. We evaluated 15 tools from CoreUtils, and the results show that our method reconstructs data types effectively, recovering five times as many structure types as IDA Pro. Manual analysis of the recovered types confirms their accuracy.
An Idle Virtual CPU Scheduling Algorithm on Xen Virtual Machines
Wang Kai and Hou Zifeng
2013, 50(11):  2429-2435. 
In the Xen virtual machine environment, the Credit scheduling algorithm is non-preemptive: when a virtual CPU is idle, it cannot inform Xen of its idle state and therefore keeps occupying the physical CPU. Existing methods for handling idle virtual CPUs pay attention only to optimizing the guest OS and neglect the service OS. Three problems thus remain: the time of idle virtual CPUs is wasted, the idle state of the service OS is not considered, and the judgment of a virtual machine's idle state is inaccurate, all of which cause needless loss. To address these problems, an idle virtual CPU scheduling algorithm based on the Credit algorithm is given. When the idle-state reception module is notified that a virtual CPU is idle, the credit of that virtual machine's virtual CPUs is modified dynamically so that the remainder of the idle time is allocated to other virtual machines. At the same time, the weight of each virtual machine is adjusted according to the average idle rate of its virtual CPUs, integrating the scheduling method with feedback control. The experimental results show that the idle virtual CPU scheduling algorithm greatly improves the performance of virtual machines.
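A toy model of the adjustment, under the loud assumption that the formulas below are illustrative placeholders rather than the paper's actual credit and weight equations: an idle vCPU's remaining credit is handed to runnable vCPUs, and each domain's weight is scaled down by its average idle rate:

```python
def redistribute(credits, idle, base_weights, idle_rates):
    """Release idle vCPUs' credit to busy ones; scale weights by idleness."""
    freed = sum(credits[v] for v in idle)
    busy = [v for v in credits if v not in idle]
    for v in busy:
        credits[v] += freed // max(len(busy), 1)   # hand the idle time over
    for v in idle:
        credits[v] = 0
    weights = {dom: int(w * (1.0 - idle_rates[dom]))
               for dom, w in base_weights.items()}
    return credits, weights

credits, weights = redistribute({"v0": 300, "v1": 300, "v2": 300},
                                idle={"v2"},
                                base_weights={"dom1": 256},
                                idle_rates={"dom1": 0.3})
print(credits, weights)   # {'v0': 450, 'v1': 450, 'v2': 0} {'dom1': 179}
```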
Performance Improvement for Irregular Data Intensive Hot-Slice with Low Computing Workload
Zheng Ninghan, Gu Zhimin, and Sun Xianhe
2013, 50(11):  2436-2443. 
With the rise and development of cloud computing, more and more irregular data intensive applications run on chip multi-core processors (CMP), and their performance is badly affected by data cache misses. Traditional methods use a helper thread running on an idle core to push irregular data into the shared last-level cache (LLC) in advance, shortly before a computing core uses them. If the helper thread runs faster than the main thread, it can push hot data into the LLC before the main thread needs them, improving the performance of the hot slice. But for a hot slice with a low computing workload, traditional methods cannot build a helper thread that runs faster than the main thread. This paper targets the performance optimization of irregular data intensive hot slices with low computing workloads. First, a formal description of the prefetch QoS of the helper thread is given, and then a new optimization method is proposed. The method is implemented on real commercial processors without additional hardware modifications. Measurements show that, compared with the traditional method on an Intel Core 2 Duo Processor 6550, the performance of the scientific computing benchmarks em3d and mst and of SPEC CPU2006 mcf improves by 1.97%, 31.63%, and 1.10%, respectively.
Two Longest Common Substring Algorithms Based on Bi-Directional Comparison
Wang Kaiyun, Kong Siqi, Fu Yunsheng, Pan Zeyou, Ma Weidong, and Zhao Qiang
2013, 50(11):  2444-2454. 
Finding the longest common substring (LCSstr) of two given strings is an important problem in string analysis, with applications in approximate string matching, biological sequence analysis, plagiarism detection, and computer virus signature detection. Two classical algorithms solve the LCSstr problem: dynamic programming (LCSstrDP) and the suffix array (LCSstrSA). LCSstrDP computes the longest common suffix of each pair of prefixes (comparing from right to left); its code is simple but its efficiency is low. LCSstrSA computes the longest common prefix of suffixes (comparing from left to right); its time complexity is linear, but it is more complex. In this paper, we propose two LCSstr algorithms based on bi-directional comparison, named LCSstrSeL and LCSstrSCeL. LCSstrSeL skips over the existing length of the LCSstr; its code remains simple and its efficiency is significantly improved over LCSstrDP. On the basis of LCSstrSeL, LCSstrSCeL adds further mechanisms, such as character spanning and spanning of contiguous equal-value segments. Test results show that the memory overhead of the new algorithms is lower than that of LCSstrSA and that their average efficiency is higher for small and medium strings; in some cases the computation can even be sublinear.
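For reference, the dynamic-programming baseline (LCSstrDP) that the paper improves on fits in a dozen lines: dp[i][j] holds the length of the longest common suffix of s[:i] and t[:j], and the answer is the maximum entry. The bi-directional skipping of LCSstrSeL/LCSstrSCeL is not reproduced here:

```python
def lcsstr(s, t):
    """Dynamic-programming LCSstr with a rolling row, O(len(s)*len(t))."""
    best, end = 0, 0
    prev = [0] * (len(t) + 1)
    for i, a in enumerate(s, 1):
        cur = [0] * (len(t) + 1)
        for j, b in enumerate(t, 1):
            if a == b:
                cur[j] = prev[j - 1] + 1      # extend the common suffix
                if cur[j] > best:
                    best, end = cur[j], i
        prev = cur
    return s[end - best:end]

print(repr(lcsstr("plagiarism detection", "virus detection")))  # ' detection'
```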
Group Obstacle Nearest Neighbor Query in Spatial Database
Yang Zexue and Hao Zhongxiao
2013, 50(11):  2455-2462. 
Existing research on group nearest neighbor queries is based on Euclidean distance and cannot solve nearest neighbor queries based on obstructed distance in the presence of obstacles. A new variant of the group nearest neighbor query in obstacle environments, the group obstacle nearest neighbor query, is proposed: it retrieves the data point with the smallest sum of obstructed distances to all query points. According to the different positional relationships between a data point and the MBR of the query points, pruning regions are constructed for the various circumstances, and obstacles that have no effect on the obstructed distance computation are discarded using these regions. An algorithm for computing the obstructed distance between a data point and the query points is given. Pruning rules for the group obstacle nearest neighbor query, covering both Euclidean distance and obstructed distance, are defined. The GONN algorithm is then put forward, based on a best-first traversal of the R-tree over the dataset, combining the pruning rules with the obstructed distance computation. Proofs of the algorithm's correctness and termination are provided, together with a time complexity analysis. Experimental results show that the algorithm is highly efficient.
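A brute-force reference for the query semantics (Euclidean only, no obstacles or pruning): the group nearest neighbor is the data point minimizing the sum of distances to all query points; GONN replaces the metric below with obstructed distances and prunes via the R-tree:

```python
import numpy as np

def group_nn(data, queries):
    """Data point with the smallest sum of Euclidean distances to queries."""
    d = np.linalg.norm(data[:, None, :] - queries[None, :, :], axis=2)
    return data[d.sum(axis=1).argmin()]

data = np.array([[0.0, 0.0], [5.0, 5.0], [2.0, 2.0]])
queries = np.array([[1.0, 1.0], [3.0, 3.0]])
print(group_nn(data, queries))   # [2. 2.]
```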
Parallel Accelerator Design for High-Throughput DNA Sequence Alignment with Hash-Index
Wang Wendi, Tang Wen, Duan Bo, Zhang Chunming, Zhang Peiheng, and Sun Ninghui
2013, 50(11):  2463-2471. 
In recent years, the rapid development of high-throughput next-generation sequencing (NGS) technologies has greatly reduced sequencing cost and time. However, both the explosion of generated NGS data and the massively parallel computation involved pose great challenges to existing computers. We take an open-source hash-index based re-sequencing algorithm, PerM, as an example to investigate optimizations for accelerating NGS alignment on commercial multi-core CPUs as well as on customized parallel architectures. Firstly, we optimize the original algorithm by reordering the bucket accessing sequence so that data locality in the shared cache is improved. Secondly, to exclude empty hash buckets, we propose a hash-index compression algorithm that matches the sequential access pattern of the optimized algorithm. Experiments on a 64-core SMP (Intel Xeon X7550) show that the optimized algorithm reduces the LLC miss ratio to about 10% of the original, improving overall performance by 4 to 11 times. Furthermore, a parallel accelerator architecture is designed and evaluated on our customized FPGA accelerator card hosting a Xilinx LX330 FPGA. As a prototype, a systolic array of 100 PEs operating at 175MHz is built. The proposed architecture achieves a reported speedup of 30 to 65 times over an 8-core CPU.
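A sketch of the empty-bucket elimination idea: a sparse hash index is compressed CSR-style into a dense value array plus an offsets array, so sequential scans touch no empty buckets. The layout is an assumption for illustration, not PerM's actual index format:

```python
import numpy as np

def compress_index(buckets):
    """Pack a mostly-empty bucket list into (offsets, values), CSR-style."""
    counts = np.fromiter((len(b) for b in buckets), dtype=np.int64)
    offsets = np.concatenate(([0], np.cumsum(counts)))
    nonempty = [np.asarray(b, dtype=np.int64) for b in buckets if b]
    values = np.concatenate(nonempty or [np.empty(0, np.int64)])
    return offsets, values

def lookup(offsets, values, h):
    return values[offsets[h]:offsets[h + 1]]   # empty buckets cost nothing

offsets, values = compress_index([[], [7, 42], [], [], [13], []])
print(lookup(offsets, values, 1))   # [ 7 42]
```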
Review on Liquid Acquisition and Modeling Techniques
Zou Ling, Qi Yue, and Zhao Qinping
2013, 50(11):  2472-2480. 
The simulation of natural phenomena has attracted considerable research attention in the computer graphics and virtual reality communities. Modeling and simulating liquid, one of the most common natural phenomena, has great value in applications such as games, medicine, and film. Traditional physics-based liquid modeling suffers from high computational complexity. With the development of visual acquisition equipment such as high-resolution digital cameras, liquid modeling based on acquired data has become another active research topic in the virtual reality community. With this approach, more realistic phenomena can be generated using techniques from computer graphics and computer vision; visual features and rich details are preserved while computational complexity is significantly reduced. However, time-varying modeling of liquid dynamics from acquired data still faces many challenges. This paper reviews recent research and development in liquid visual acquisition and simulation, analyzes and summarizes the main methods in the area, and discusses the challenges and future development trends of the domain.