ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 December 2015, Volume 52, Issue 12
A DRM Scheme Supporting Attribute Revocation and Outsourced Decryption in the Cloud Computing Environment
Zhang Weiwei, Feng Gui, Liu Jianyi, Yang Yixian
2015, 52(12):  2659-2668.  doi:10.7544/issn1000-1239.2015.20150749
DRM (digital rights management) is an important part of the construction of informatization in China. Unfortunately, high investment costs and poor user experience are two issues that hinder the further adoption of DRM in practice. Previous studies of DRM focused merely on the storage service of cloud computing, neglecting the advantages of its computing service. This paper proposes a DRM scheme supporting attribute revocation and outsourced decryption in the cloud computing environment. In order to protect users' privacy in the DRM system, we propose that users purchase licenses through anonymous tags. In addition, in order to give full play to the computing power of the cloud and to support flexible, fine-grained revocation of user attributes, a CP-ABE scheme supporting both outsourced decryption and attribute revocation is proposed. Compared with existing digital rights management schemes based on cloud computing, the proposed scheme protects both content security and user privacy. Besides, it supports a flexible access control mechanism and fine-grained revocation of user attributes, making it more applicable to content protection in cloud computing.
A Real-Time Task Availability Improving Fault-Tolerant Scheduling Algorithm on Heterogeneous Platform
Sun Jian, Zhang Xingjun, Dong Xiaoshe
2015, 52(12):  2669-2683.  doi:10.7544/issn1000-1239.2015.20150721
With the rapid development of Internet Plus, cloud computing, big data and other fields, heterogeneous systems have become an important platform for deploying scientific computing, industrial control, cloud storage and other key applications. Because of the heterogeneity of processor performance and software/hardware structure, heterogeneous platforms show better scalability and a high cost-performance ratio. However, as the platform scale grows larger and system applications become more complex, system schedulability worsens and availability decreases. To solve this problem, we propose a fault-tolerant scheduling algorithm that aims to improve availability for real-time tasks on heterogeneous platforms, namely AIFSAL. The algorithm uses processor utilization and availability cost to design a real-time task scheduling model, and combines availability cost with the primary/backup copy (PB) method for fault tolerance. During task scheduling, for both a task's primary and backup copies, processors with lower availability cost are chosen preferentially in order to improve system availability; meanwhile, backup copies are executed as passive backup copies preferentially in order to achieve fault tolerance and ensure the schedulability of task allocation. Simulation experiments and comparison with other task scheduling algorithms, including the availability approached task scheduling algorithm (AATSAL), task partition based fault tolerant rate-monotonic (TPFTRM) and the earliest completion algorithm (MinMin), verify the effectiveness of the proposed algorithm in improving availability and assuring schedulability. Hence, the comprehensive system cost is reduced and comprehensive performance is improved significantly.
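To make the placement rule concrete, the following minimal sketch (illustrative names and checks, not the authors' AIFSAL implementation) assigns each copy to the schedulable processor with the lowest availability cost, keeping a backup copy off its primary's processor:

```python
# A minimal sketch (not the authors' AIFSAL implementation) of the core
# placement rule: for each copy, prefer the processor with the lowest
# availability cost that can still accommodate the copy's utilization.
from dataclasses import dataclass

@dataclass
class Processor:
    name: str
    avail_cost: float          # lower means higher expected availability
    utilization: float = 0.0   # fraction of capacity already committed

def place_copy(processors, util_demand, exclude=None):
    """Pick the schedulable processor with the lowest availability cost.

    `exclude` keeps a backup copy off the processor hosting the primary,
    so a single processor failure cannot take out both copies.
    """
    candidates = [p for p in processors
                  if p is not exclude and p.utilization + util_demand <= 1.0]
    if not candidates:
        return None  # copy rejected; task set not schedulable here
    best = min(candidates, key=lambda p: p.avail_cost)
    best.utilization += util_demand
    return best

procs = [Processor("P1", 0.30), Processor("P2", 0.10), Processor("P3", 0.55)]
primary = place_copy(procs, util_demand=0.4)                   # goes to P2
backup = place_copy(procs, util_demand=0.4, exclude=primary)   # goes to P1
print(primary.name, backup.name)
```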
A Packet Loss Differentiation Algorithm for 4G-LTE Network
Du Haipeng, Zheng Qinghua, Zhang Weizhan, Yan Jifeng
2015, 52(12):  2684-2694.  doi:10.7544/issn1000-1239.2015.20150726
On wireless networks, packet losses are usually caused by congestion or wireless fading. Loss differentiation is the fundamental problem of identifying the cause of packet losses based on the transmission characteristics of the wireless network. When moving to LTE networks, this remains an issue worth studying due to possible changes in packet loss patterns. In light of this, this paper proposes a loss differentiation algorithm, LDA-LTE, for LTE networks. We first conduct a detailed measurement study of the transmission characteristics of an LTE network under configurable congestion and wireless channel fading scenarios. The test bed is based on an LTE network emulated in the laboratory with an LTE base station emulator. We then find that the packet loss patterns that previous works used as the principal basis for loss differentiation no longer hold on LTE networks, causing those approaches to fail. Finally, we design a loss differentiation algorithm, LDA-LTE, which collectively uses the observed loss patterns to infer the cause of packet losses on the LTE network in the hybrid scenario where congestion loss and wireless loss occur simultaneously. Evaluations on the test bed show that the accuracy of loss differentiation with LDA-LTE is significantly improved compared with previous work.
Preemption Threshold Scheduling for Energy Harvesting Based Cyber-Physical Systems
Ge Yongqi, Dong Yunwei, Gu Bin
2015, 52(12):  2695-2706.  doi:10.7544/issn1000-1239.2015.20150745
In energy harvesting based cyber-physical systems (EHCPS), the energy management architecture differs from that of traditional battery-powered embedded systems. Task scheduling in EHCPS should take into account the output power of the energy harvesting unit, the energy level of the battery and the energy consumption of computing tasks. A real-time task may meet its time constraint only if its energy constraint is satisfied. Against this background, the schedulability analysis of conventional preemption threshold scheduling does not consider tasks' energy consumption, so its preemption threshold assignment algorithm is not suitable for EHCPS. An energy-related preemption threshold scheduling (ERPT) for EHCPS is proposed in this paper. It integrates tasks' energy consumption and the energy supply ability into the schedulability analysis, and the corresponding preemption threshold assignment algorithms are also presented. ERPT thus offers a solution for applying preemption threshold scheduling in EHCPS. The proposed scheduling strategy is validated against two existing classical algorithms. The experimental results show that the proposed scheduling can reduce task preemptions effectively.
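As a rough illustration of the kind of energy constraint the abstract refers to, the sketch below (assumed names and a simplified supply model, not the paper's ERPT analysis) checks whether battery energy plus energy harvested over a job's execution window covers its demand:

```python
# A toy admission check in the spirit of energy-aware scheduling for EHCPS
# (a sketch under assumed names, not the paper's ERPT analysis): before a
# job runs, verify that battery level plus energy harvested over the job's
# execution window covers its energy demand.
def energy_feasible(battery_j, harvest_w, exec_time_s, power_w, reserve_j=0.0):
    """Return True if the job's energy constraint is satisfied.

    battery_j   -- current battery energy (joules)
    harvest_w   -- predicted harvesting power during execution (watts)
    exec_time_s -- worst-case execution time (seconds)
    power_w     -- the task's power draw while running (watts)
    reserve_j   -- energy that must remain afterwards (joules)
    """
    demand = power_w * exec_time_s
    supply = battery_j + harvest_w * exec_time_s
    return supply - demand >= reserve_j

# Under ERPT-style scheduling, a lower-priority job would be allowed to keep
# the CPU against a preempter only when both its preemption threshold and an
# energy check like this one pass.
print(energy_feasible(battery_j=5.0, harvest_w=0.5, exec_time_s=4.0, power_w=1.2))
```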
A Virtual Currency-Based Incentive-Aware Low Delay Routing for DTNs
Jiang Qingfeng, Men Chaoguang, Li Xiang, He Zhongzheng
2015, 52(12):  2707-2724.  doi:10.7544/issn1000-1239.2015.20140566
Due to limited resources such as bandwidth, buffer and energy, most delay tolerant network (DTN) nodes are selfish and do not forward messages for other nodes, in order to save their precious resources, which seriously degrades routing performance. To stimulate selfish DTN nodes to cooperatively forward messages and to reduce message delivery delay, this paper proposes a virtual currency-based incentive-aware low delay routing algorithm called VCILDR. A delay-based currency payment and allocation strategy is established in VCILDR to encourage selfish nodes to forward messages for other nodes. In this way, directly beneficial messages are forwarded to nodes with lower delivery delay while mutually beneficial messages are exchanged at the same time. A bargaining game model of alternating offers is established to determine which mutually beneficial messages are exchanged. In addition, a greedy algorithm for solving the model's subgame perfect equilibrium is proposed. Extensive simulations are carried out on a real-world dataset to verify the performance of this incentive-aware low delay routing. The experimental results show that the proposed routing can effectively stimulate selfish DTN nodes to cooperatively forward messages for others, reducing the message delivery delay and improving the message delivery success ratio at the same time.
A Kind of New Anti-Collision Approach Based on AID
Zhang Degan, Song Xiaodong, Zheng Ke, Liang Yanpin
2015, 52(12):  2725-2735.  doi:10.7544/issn1000-1239.2015.20140560
RFID is one of the key technologies of the Internet of Things that needs to be studied. Based on our research on RFID technologies, a new anti-collision approach based on AID (associated ID) is put forward in this paper. Anti-collision approaches are commonly divided by realization mechanism into those for tags with memory and those for memoryless tags. We propose a method to build associations between tags with memory, so that tags satisfying a certain trigger condition can take the initiative to send their own ID. Tags use modulated binary pulses to send data to the reader, which differs from the approaches for memoryless tags. Because an association relationship exists between a father-tag and its son-tags, when this is applied to the multi-tree searching method that we have proposed, a single communication can identify multiple tags at one time, which greatly improves identification efficiency. When the approach is applied to the uncertainty ALOHA algorithm, the reader can determine the location of empty slots based on the position of the binary pulse, so the reader avoids the efficiency loss caused by reading empty slots. Since the approach places no limit on ID length, experimental results of our simulations show that it can greatly decrease the probability of collision and improve the identification efficiency of the system across many kinds of applications.
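For context, the sketch below implements the classic memoryless binary query-tree identification that tree-based anti-collision methods generalize; the AID father-tag/son-tag association itself is not reproduced here:

```python
# A compact sketch of classic binary query-tree tag identification, the
# memoryless baseline that tree-based anti-collision builds on (the AID
# father/son-tag association itself is not reproduced here).
def query_tree_identify(tag_ids):
    """Identify all tag IDs (bit strings) by prefix queries.

    A query with prefix p gets: no reply (0 matches), a clean reply
    (1 match), or a collision (>1 match), in which case the reader
    splits the prefix into p+'0' and p+'1'.
    """
    identified, stack, queries = [], [""], 0
    while stack:
        prefix = stack.pop()
        matches = [t for t in tag_ids if t.startswith(prefix)]
        queries += 1
        if len(matches) == 1:
            identified.append(matches[0])
        elif len(matches) > 1:
            stack.extend([prefix + "0", prefix + "1"])
    return identified, queries

tags = ["0010", "0111", "1100", "1101"]
ids, n = query_tree_identify(tags)
print(sorted(ids), "queries:", n)   # all four tags found
```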
EasiARS: Dynamic WiFi Link Access and Adaptive Networking Based on Multi-Mode Communication in Wireless Sensor Networks
Peng Kang, Zhao Ze, Chen Haiming, Li Dong, Shi Hailong, Cui Li
2015, 52(12):  2736-2749.  doi:10.7544/issn1000-1239.2015.20140667
In some wireless sensor network (WSN) applications, gateway nodes use WiFi to access the upper layer network. However, due to the instability of wireless link quality, considerable effort is needed to find a stable and reliable gateway deployment when using fixed gateways in the traditional WSN structure. In this paper, we propose a novel method for dynamic WiFi link access and adaptive networking called EasiARS, which is applicable to scenarios where WiFi link quality is unstable and all devices, including gateways and nodes, are energy constrained. EasiARS is based on WiFi and ZigBee multi-mode communication. It contains a low-overhead real-time WiFi coverage detection method, a role switching method, and a clustering and networking method within the WiFi coverage area. EasiARS makes it feasible to rapidly deploy WSN systems in dynamic WiFi link environments. While the system is running, nodes adjust their roles and transmission strategies according to the WiFi link quality, which ensures that the WSN uploads data stably and balances the energy consumption in the WiFi coverage area. Experiments verify that EasiARS enables nodes to switch to suitable roles and transmit stably in dynamic WiFi link environments. Compared with the single fixed gateway method, EasiARS reduces the packet loss rate by up to 90% under different WiFi link qualities, and it improves network lifetime by 67% compared with the fixed multi-gateway method, meanwhile reducing the burden of WiFi link quality testing and evaluation before deployment.
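A simplified sketch of the role-switching idea follows; the threshold, hysteresis and role names are illustrative assumptions, not EasiARS's actual protocol:

```python
# A simplified sketch of role switching (names and thresholds are
# illustrative, not EasiARS's actual protocol): a dual-radio node promotes
# itself to gateway only while its WiFi link is good enough, and otherwise
# relays over ZigBee toward a cluster head.
GOOD_RSSI_DBM = -75      # assumed promotion threshold
HYSTERESIS_DBM = 5       # avoid flapping near the threshold

def next_role(current_role, wifi_rssi_dbm):
    if current_role == "gateway":
        # demote only when the link clearly degrades
        if wifi_rssi_dbm < GOOD_RSSI_DBM - HYSTERESIS_DBM:
            return "zigbee_relay"
        return "gateway"
    # promote only when the link is clearly good
    if wifi_rssi_dbm > GOOD_RSSI_DBM + HYSTERESIS_DBM:
        return "gateway"
    return "zigbee_relay"

for rssi in (-90, -72, -69, -78, -82):
    print(rssi, "->", next_role("zigbee_relay", rssi))
```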
A Privacy Protection Algorithm Based on Network Voronoi Graph over Road Networks
Pan Xiao, Wu Lei, Hu Zhaojun
2015, 52(12):  2750-2763.  doi:10.7544/issn1000-1239.2015.20140602
With advances in wireless communication and mobile positioning technologies, location-based services (LBSs) have seen widespread adoption. Academia and industry have paid attention to location privacy preservation in LBSs for about ten years. Most existing location cloaking algorithms over road networks are based on DFS-based or BFS-based graph traversal. However, the main drawback of these approaches is repetitive traversal of the global road network, which results in low efficiency and high communication cost. To resolve this problem, we propose a network Voronoi-based location anonymization algorithm, NVLA, on road networks. With the help of a network Voronoi diagram (NVD), the road network is divided offline into several network Voronoi cells (NVCs). We thereby reduce the problem of anonymization over the global road network to finding cloaking sets in local network Voronoi cells. Next, according to the number of road segments and the number of users in a cell, network Voronoi cells are classified as unsafe, safe-medium or safe-large. Finally, according to the features of the different cell types, different cloaking algorithms are proposed. We design and conduct a series of experiments, and the results validate the efficiency and effectiveness of the proposed algorithm: the average cloaking time is only 0.4 ms and the cloaking success rate is about 100%, while only 0.01% of the query cost is sacrificed.
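The cell classification step might look like the following sketch, with illustrative thresholds rather than the paper's actual rules:

```python
# A sketch of the cell classification step (thresholds are illustrative,
# not the paper's): each network Voronoi cell is labeled by whether it has
# enough users and enough road segments to form a cloaking set locally.
def classify_cell(num_users, num_segments, k, l):
    """k: required anonymity level; l: required number of road segments."""
    if num_users < k or num_segments < l:
        return "unsafe"          # must borrow from neighboring cells
    if num_users < 4 * k:        # assumed boundary between medium and large
        return "safe-medium"
    return "safe-large"

print(classify_cell(num_users=3,  num_segments=5, k=5, l=3))   # unsafe
print(classify_cell(num_users=12, num_segments=5, k=5, l=3))   # safe-medium
print(classify_cell(num_users=60, num_segments=9, k=5, l=3))   # safe-large
```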
A Heuristic Dyna Optimizing Algorithm Using Approximate Model Representation
Zhong Shan, Liu Quan, Fu Qiming, Zhang Zongzhang, Zhu Fei, Gong Shengrong
2015, 52(12):  2764-2775.  doi:10.7544/issn1000-1239.2015.20148160
To address the problems of reinforcement learning within the Dyna framework, such as slow convergence, inappropriate representation of the environment model, and delayed learning of a changed environment, this paper proposes a novel heuristic Dyna optimization algorithm based on an approximate model, HDyna-AMR, which approximates the Q-value function via a linear function and solves the optimal value function by the gradient descent method. HDyna-AMR consists of two phases: a learning phase and a planning phase. In the former, the algorithm approximately models the environment by interacting with it and records feature appearance frequencies, while in the latter, the approximated environment model is used for planning with extra rewards assigned according to the feature appearance frequencies. Additionally, the paper proves the convergence of the proposed algorithm theoretically. Experimentally, we apply HDyna-AMR to the extended Boyan Chain problem and the Mountain Car problem, and the results show that HDyna-AMR can obtain an approximately optimal policy in both discrete and continuous state spaces. Furthermore, compared with Dyna-LAPS (Dyna-style planning with linear approximation and prioritized sweeping) and Sarsa(λ), HDyna-AMR outperforms both in terms of convergence rate and robustness to environmental changes.
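A minimal sketch of the value-function machinery described above: Q represented as a linear function of features, updated by gradient descent on the TD error, with a small planning-phase bonus tied to feature appearance frequency (the bonus form and constants are assumptions, not the exact HDyna-AMR update):

```python
# A minimal sketch: linear Q approximation updated by gradient descent on
# the TD error, plus an assumed planning bonus proportional to how often a
# feature has been seen (not the exact HDyna-AMR update rule).
import numpy as np

n_features, alpha, gamma, kappa = 8, 0.1, 0.95, 0.01
w = np.zeros(n_features)          # weights of the linear Q approximation
visit_count = np.zeros(n_features)

def q_value(phi):                 # phi: feature vector for (state, action)
    return w @ phi

def td_update(phi, reward, phi_next, bonus=0.0):
    """One gradient-descent step on the squared TD error."""
    global w
    target = reward + bonus + gamma * q_value(phi_next)
    delta = target - q_value(phi)
    w += alpha * delta * phi      # gradient of Q w.r.t. w is phi itself
    visit_count[phi > 0] += 1

# learning phase: real experience, no bonus
phi, phi2 = np.eye(n_features)[0], np.eye(n_features)[1]
td_update(phi, reward=1.0, phi_next=phi2)
# planning phase: simulated experience with a feature-frequency bonus
td_update(phi, reward=1.0, phi_next=phi2, bonus=kappa * visit_count[0])
print(w[:2])
```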
Bare-Bones Differential Evolution Algorithm Based on Trigonometry
Peng Hu, Wu Zhijian, Zhou Xinyu, Deng Changshou
2015, 52(12):  2776-2788.  doi:10.7544/issn1000-1239.2015.20140230
The DE algorithm is one of the most popular and powerful evolutionary algorithms for global optimization problems. However, the performance of DE is greatly influenced by the choice of a suitable mutation strategy and parameter settings, and making this choice is challenging and time-consuming. To overcome this drawback, a novel bare-bones differential evolution algorithm based on trigonometry, called tBBDE, is proposed in this paper, and its convergence is analyzed in terms of stochastic functional theory. The proposed algorithm adopts a triangle Gaussian mutation strategy as well as a ternary crossover and an adaptive crossover probability strategy for individual update. When the algorithm is trapped in premature convergence or stagnation, it executes a population disturbance. In this way, the proposed algorithm not only inherits the advantages of bare-bones algorithms but also retains the characteristic of DE of evolving based on the differential information of randomly selected individuals. Experimental studies have been conducted on 26 benchmark functions, including unimodal, multimodal, shifted and high-dimensional test functions, and the results verify the algorithm's effectiveness and reliability. Besides, comparisons with other bare-bones algorithms and state-of-the-art DE variants show that tBBDE is a competitive new algorithm.
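One plausible reading of the triangle Gaussian mutation is sketched below: the donor is sampled from a Gaussian centered at the centroid of three randomly chosen individuals, with a spread derived from their pairwise separation (the exact weighting in tBBDE may differ):

```python
# A hedged sketch of a triangle Gaussian mutation in the bare-bones spirit:
# sample the donor from a Gaussian centered at the centroid of three random
# individuals, with a per-dimension spread taken from how far apart they
# are (the exact weighting in tBBDE may differ).
import numpy as np

rng = np.random.default_rng(0)

def triangle_gaussian_mutation(pop, i):
    """Generate a donor vector for individual i from a 3-point Gaussian."""
    idx = rng.choice([j for j in range(len(pop)) if j != i], size=3,
                     replace=False)
    a, b, c = pop[idx]
    center = (a + b + c) / 3.0
    spread = (np.abs(a - b) + np.abs(b - c) + np.abs(a - c)) / 3.0
    return rng.normal(loc=center, scale=spread + 1e-12)

pop = rng.uniform(-5, 5, size=(20, 4))   # 20 individuals, 4 dimensions
donor = triangle_gaussian_mutation(pop, i=0)
print(donor)
```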
Frequent Sequential Pattern Mining under Differential Privacy
Lu Guoqing, Zhang Xiaojian, Ding Liping, Li Yanfeng, Liao Xin
2015, 52(12):  2789-2801.  doi:10.7544/issn1000-1239.2015.20140516
Frequent sequential pattern mining is an exploratory problem in the field of data mining. However, directly releasing the discovered frequent patterns and their true support counts may reveal individuals' privacy. The state-of-the-art solution for this problem is differential privacy, which offers a strong degree of privacy protection by adding noise. Due to the inherent sequentiality and high dimensionality of sequences, it is challenging to apply differential privacy to frequent sequential pattern mining. To address these problems, this paper presents a differentially private method called Diff-FSPM that finds frequent patterns in an interactive way. To reduce the impact of dimensionality, Diff-FSPM first employs the exponential mechanism to obtain an optimal sequence length for truncating each sequence in the original dataset. After that, Diff-FSPM relies on a prefix tree to compress all the frequent patterns, and then utilizes the Laplace mechanism to perturb the true support of each frequent pattern. To allocate the privacy budget efficiently, Diff-FSPM uses closed frequent patterns and a Markov assumption to guide the mining process. Finally, Diff-FSPM adopts a post-processing technique with consistency constraints to boost the accuracy of the returned noisy support counts. We theoretically prove that Diff-FSPM satisfies ε-differential privacy, and the experimental results show that it outperforms existing methods in terms of utility.
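The support-perturbation step can be illustrated as follows; the sketch adds Laplace noise calibrated to a sensitivity/ε ratio to each true support, without modeling how Diff-FSPM splits the budget across mining levels:

```python
# A sketch of the support-perturbation step: Laplace noise calibrated to a
# sensitivity/epsilon ratio is added to each true support count (the
# budget allocation across mining levels in Diff-FSPM is not modeled here).
import numpy as np

rng = np.random.default_rng(42)

def perturb_supports(true_supports, epsilon, sensitivity=1.0):
    """Add Laplace(sensitivity/epsilon) noise to each support count."""
    scale = sensitivity / epsilon
    return {p: s + rng.laplace(0.0, scale) for p, s in true_supports.items()}

supports = {("a", "b"): 120, ("a", "b", "c"): 87}
print(perturb_supports(supports, epsilon=0.5))
```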
A New Framework of Action Recognition: 3DHOGTCC and 3DHOOFG
Tong Ming, Wang Fan, Wang Shuo, Ji Chenglong
2015, 52(12):  2802-2812.  doi:10.7544/issn1000-1239.2015.20140553
Video action recognition has high academic research value and wide market application prospects in the field of video semantic analysis. In order to describe video actions accurately, two motion descriptors based on dense trajectories are proposed in this paper. Firstly, to capture the local motion information of the action, dense sampling is performed in the moving object region by constraining and clustering the optical flow. Secondly, the corners of the moving target are selected as feature points, which are tracked to obtain dense trajectories. Finally, a 3D histograms of oriented gradients in trajectory-centered cube (3DHOGTCC) descriptor and a 3D histograms of oriented optical flow gradients (3DHOOFG) descriptor are constructed in the video cube centered on each trajectory to describe the local motion area accurately. To make full use of the scene information in which the action occurs, a framework combining motion descriptors and static descriptors is proposed, which lets dynamic characteristics and static background features fuse and complement each other and achieves good recognition accuracy even in complex scenes such as those with camera movement. This paper adopts leave-one-out cross validation on the Weizmann and UCF-Sports datasets and four-fold cross validation on the KTH and YouTube datasets, and the experiments show the effectiveness of the new framework.
Constrained Multi-Objective Optimization Algorithm Based on Dual Populations
Bi Xiaojun, Zhang Lei, Xiao Jing
2015, 52(12):  2813-2823.  doi:10.7544/issn1000-1239.2015.20148025
In order to improve the distribution and convergence of constrained multi-objective optimization algorithms, this paper proposes a constrained multi-objective optimization algorithm based on dual populations. The improved Harmonic distance eliminates the effect of individuals with poor Pareto rank and large distance, consequently enhancing the distribution of the population; it also reduces the amount of calculation effectively and improves the efficiency of the suggested algorithm. Then, the new update method for the infeasible solution set is closely linked with the feasible solution set, and infeasible individuals that are excellent in both objective function value and constraint violation can be retained, so better feasible individuals will be produced in the subsequent evolution process, improving the diversity of the populations and the search efficiency simultaneously. Finally, the new variation strategy makes full use of the information of the best feasible individuals and the good infeasible individuals, which ensures good exploration and exploitation ability and balances global and local search. The proposed algorithm is compared with 3 state-of-the-art constrained multi-objective optimization algorithms on the CTP test problems. Simulation results show that the presented algorithm has clear advantages over the other algorithms, as it ensures good convergence while maintaining a uniform distribution.
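As a reference point, a plain Harmonic distance (the harmonic mean of the distances to the k nearest neighbors) can be computed as in the sketch below; the paper's improvement additionally filters out poorly ranked, distant individuals before this computation:

```python
# A sketch of the Harmonic distance used as a crowding measure: the
# harmonic mean of distances to the k nearest neighbors, which is
# dominated by the closest neighbors and so is less distorted by far-away
# individuals (the paper's improved variant is not reproduced here).
import numpy as np

def harmonic_distance(points, k):
    """Per-individual harmonic mean of its k nearest-neighbor distances."""
    diff = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(d, np.inf)          # ignore self-distance
    knn = np.sort(d, axis=1)[:, :k]      # k smallest distances per row
    return k / (1.0 / knn).sum(axis=1)   # harmonic mean

pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.5, 0.5], [1.0, 1.0]])
print(harmonic_distance(pts, k=2))       # small value = crowded individual
```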
A Random Walk Based Iterative Weighted Algorithm for Sub-Graph Query
Zhang Xiaochi, Yu Hua, Gong Xiujun
2015, 52(12):  2824-2833.  doi:10.7544/issn1000-1239.2015.20140801
Nowadays, technological development in measuring molecular interactions has led to an increasing number of large-scale biological molecular networks. Identifying conserved and stable functional modules from such networks helps not only to disclose the function of inner components, but also to understand their relationships systematically in complex systems. As a classical NP-complete problem, the sub-graph query problem is attracting research efforts in analyzing such behaviors in fields such as social networks and biological networks. Calculating node similarities and reducing the size of the target graph are two common means of improving query precision and reducing computational complexity in the study of sub-graph algorithms. For the problem of querying sub-graphs in complex protein interaction networks, this paper presents a sub-graph query algorithm based on a semi-Markov random walk model. A comprehensive similarity measurement based on the semi-Markov random walk model is designed to integrate the similarities of nodes, structures and their neighbors. Meanwhile, an iterative procedure is applied to reduce the size of the target graph by computing a global correspondence score and removing nodes in clusters with lower similarities. The experimental results on multiple real protein query networks demonstrate that the proposed algorithm improves performance in both query precision and computational complexity.
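An IsoRank-style fixed-point iteration, sketched below, conveys the flavor of propagating node similarity through neighborhoods; it stands in for, and is not identical to, the paper's semi-Markov random walk scoring:

```python
# A hedged sketch of iterative neighborhood similarity propagation between
# a query graph and a target graph (an IsoRank-style fixed point, standing
# in for the paper's semi-Markov random walk scoring).
import numpy as np

def iterate_similarity(A_q, A_t, node_sim, alpha=0.8, iters=50):
    """Iterate R = alpha * W_q R W_t^T + (1 - alpha) * node_sim.

    A_q, A_t  -- adjacency matrices of query and target graphs
    node_sim  -- prior node-to-node similarity (e.g. sequence similarity)
    """
    W_q = A_q / np.maximum(A_q.sum(axis=1, keepdims=True), 1)
    W_t = A_t / np.maximum(A_t.sum(axis=1, keepdims=True), 1)
    R = node_sim.copy()
    for _ in range(iters):
        R = alpha * W_q @ R @ W_t.T + (1 - alpha) * node_sim
    return R

A_q = np.array([[0, 1], [1, 0]])
A_t = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])
prior = np.ones((2, 3)) / 3
# target nodes with persistently low scores could then be pruned iteratively
print(iterate_similarity(A_q, A_t, prior).round(3))
```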
A Method to Set Decay Factor Based on Gaussian Function
Han Meng, Wang Zhihai, Yuan Jidong
2015, 52(12):  2834-2843.  doi:10.7544/issn1000-1239.2015.20131883
A data stream is a continuous, time-varying sequence of data elements whose embedded information changes over time. In some data stream applications, the information embedded in the data arriving in the most recent time period is of particular value. Therefore, the time decay model (TDM) is used for mining frequent patterns on data streams. Existing methods for designing the time decay factor are somewhat arbitrary, so the result set is unstable; alternatively, they consider only 100% recall or 100% precision of the algorithm, while ignoring the corresponding precision or recall. In order to balance high recall and high precision and to ensure the stability of the result set, a novel way to set an average decay factor is designed. To further increase the weights of the latest transactions and reduce the weights of historical transactions, another novel way to design the decay factor based on a Gaussian function is proposed. To compare the pros and cons of different time factors, four time decay models are studied and designed, and algorithms based on these four models are designed to discover closed frequent patterns over data streams. The performance of the proposed methods in mining frequent patterns on high-density and low-density data streams is evaluated via experiments. Results show that using the average time decay factor balances the high recall and high precision of the algorithm, and that setting the decay factor based on the Gaussian function performs better than the other approaches.
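A Gaussian-shaped decay can be contrasted with the usual exponential factor as in the sketch below (σ and the exponential base are illustrative parameters, not the paper's settings):

```python
# A sketch of a Gaussian-shaped decay: a transaction's weight falls off
# with its age as exp(-age^2 / (2 sigma^2)), so recent transactions keep
# near-full weight while old ones decay much faster than under a typical
# exponential factor (sigma and the base 0.98 are assumed parameters).
import math

def gaussian_decay_weight(age, sigma):
    return math.exp(-(age ** 2) / (2 * sigma ** 2))

def exponential_decay_weight(age, decay_factor=0.98):
    return decay_factor ** age

for age in (0, 5, 20, 50):
    print(age, round(gaussian_decay_weight(age, sigma=20), 3),
          round(exponential_decay_weight(age), 3))
```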
Frequent Graph Mining on Multi-Core Processor
Luan Hua, Zhou Mingquan, Fu Yan
2015, 52(12):  2844-2856.  doi:10.7544/issn1000-1239.2015.20140598
Multi-core processors have become the mainstream of modern processor architecture. Frequent graph mining is a popular problem with practical applications in many domains. Accelerating the mining of frequent graphs by taking full advantage of multi-core processors is therefore of research significance and practical value. A parallel mining strategy based on depth-first search (DFS) is proposed, and a task pool is used to maintain the workload. Compared with a method that uses breadth-first search, data temporal locality is improved and a large amount of memory is saved. Cache-conscious node-edge arrays, in which the record data of a thread are arranged contiguously, are designed to decrease both the size of the representation of the original graphs and the cache miss ratio. False sharing, which severely degrades performance, is mostly eliminated. In order to reduce lock contention, a flexible method of looking for work tasks is explored, and memory management queues are utilized to reduce the overhead of frequent memory allocation and free operations. A detailed performance study and analysis is conducted on both synthetic and real data sets. The results show that the proposed techniques can efficiently lower memory usage and cache misses, and achieve a 10-fold speedup on a 12-core machine.
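The task-pool pattern is sketched below with Python threads standing in for native worker threads (CPython's GIL would mask any real speedup, so this shows only the structure, not the paper's implementation):

```python
# A bare-bones sketch of the DFS task-pool pattern: workers pull a pattern
# from a shared pool, record it, and push its frequent extensions back as
# new subtasks. `expand` is a hypothetical stand-in for candidate growth.
import queue
import threading

task_pool = queue.Queue()
results, results_lock = [], threading.Lock()

def expand(pattern):
    """Hypothetical DFS step: return frequent extensions of `pattern`."""
    return [pattern + (e,) for e in range(2) if len(pattern) < 3]

def worker():
    while True:
        pattern = task_pool.get()
        with results_lock:
            results.append(pattern)
        for child in expand(pattern):   # DFS: push subtasks into the pool
            task_pool.put(child)
        task_pool.task_done()

task_pool.put(())                       # empty seed pattern
threads = [threading.Thread(target=worker, daemon=True) for _ in range(4)]
for t in threads:
    t.start()
task_pool.join()                        # returns when every task is done
print(len(results), "patterns enumerated")
```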
An Algorithm on Mining Approximate Functional Dependencies in Probabilistic Database
Miao Dongjing, Liu Xianmin, Li Jianzhong
2015, 52(12):  2857-2865.  doi:10.7544/issn1000-1239.2015.20140685
An approximate functional dependency (AFD) is a functional dependency that almost holds, and most existing works are only able to mine AFDs from ordinary data. Sometimes, data is stored in a probabilistic database; in order to mine AFDs from such data, we define the probabilistic AFD, namely the (λ, δ)-AFD, which differs from previous definitions. We propose a dynamic programming algorithm to compute the confidence probability of a candidate AFD and check whether it exceeds the probability threshold; however, owing to the high time complexity of the dynamic programming, we derive a lower bound based on the Chernoff bound to prune candidates as early as possible. Then, with the help of the anti-monotone property, we propose a mining algorithm based on lexicographical order, together with several pruning criteria to speed up the mining process. Finally, experiments are performed on synthetic and real-life data sets; the results show the effectiveness of the pruning criteria and the scalability of our mining algorithm, and we present interesting results mined from the DBLP data set.
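For intuition, the deterministic building block of AFD confidence, the g3-style fraction of tuples kept when each X-group retains only its most common Y-value, is sketched below; the paper's (λ, δ)-AFD evaluates this kind of measure over possible worlds via dynamic programming, which is not reproduced here:

```python
# A sketch of the deterministic analogue of AFD confidence for a candidate
# X -> Y: the fraction of tuples that survive when, within each X-group,
# only the most common Y-value is kept (the g3-style measure). The paper's
# probabilistic (lambda, delta)-AFD and Chernoff pruning are not shown.
from collections import Counter, defaultdict

def afd_confidence(rows, x_attrs, y_attr):
    groups = defaultdict(Counter)
    for row in rows:
        key = tuple(row[a] for a in x_attrs)
        groups[key][row[y_attr]] += 1
    kept = sum(c.most_common(1)[0][1] for c in groups.values())
    return kept / len(rows)

rows = [
    {"city": "Harbin", "zip": "150001"},
    {"city": "Harbin", "zip": "150001"},
    {"city": "Harbin", "zip": "150006"},   # one violating tuple
    {"city": "Beijing", "zip": "100080"},
]
print(afd_confidence(rows, x_attrs=("city",), y_attr="zip"))  # 0.75
```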
Survey of Physics-Based Character Animation
Zhao Jianjun, Wei Yi, Xia Shihong, Wang Zhaoqi
2015, 52(12):  2866-2878.  doi:10.7544/issn1000-1239.2015.20140634
Character animation is one of the main research topics of computer graphics and virtual reality, and it plays an important role in many applications such as digital entertainment, 3D games and manufacturing. Recently, along with the development of 3D animated games and special effects production in film making, physics-based character animation has attracted widespread attention and has been one of the hottest topics in computer graphics for many years. As the requirement for physical realism grows, many new methods have emerged, which greatly enhance the animation and improve performance. The key issue is the locomotion controller, which computes the joint torques that drive the character's motion. This paper focuses on character animation techniques. Firstly, research on character animation based on physical simulation is reviewed. According to the way joint torques are calculated, these techniques are classified into 7 categories: spacetime constraints, constrained dynamics optimization control, low-dimensional planning, finite-state controllers, data-driven techniques, dynamic selection of motion examples, and integration with statistical models. Their principles and characteristics are detailed, and recent works are highlighted. Secondly, through analysis of the related literature, the advantages and disadvantages of the existing simulation methods are summarized. Finally, we point out various open research areas and possible future directions.
Keyframe Extraction Method Based on Dominating Set
Nie Xiushan, Chai Yan’e, Teng Cong
2015, 52(12):  2879-2887.  doi:10.7544/issn1000-1239.2015.20140701
Keyframe extraction is one of the important steps in video processing, and it is widely used in video content analysis. A keyframe extraction method is proposed in this paper for content-based video summarization. In order to depict the structure of and relations among the frames of a video, we first model the video as an undirected weighted graph, where the frames are taken as vertices and the connections between vertices are taken as edges. The weights of the edges are computed frame-by-frame using the Hausdorff distances between pairs of speeded-up robust features, which are local and robust features of frames. Subsequently, based on this representation, keyframe extraction becomes equivalent to selecting a minimum dominating set in the graph, and integer linear programming is used to select it. Finally, the keyframes are extracted according to the vertices in the obtained dominating set. We run the proposed method on different types of videos and evaluate its fidelity and compression ratio. Unlike traditional methods, the proposed method depends on video content rather than on time or video shots. The experimental results show that the keyframes extracted by the proposed method have good representation and discrimination ability, as well as high fidelity and compression ratios.
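The selection step can be sketched with a greedy approximation in place of the paper's integer linear program: repeatedly pick the frame covering the most not-yet-dominated frames, until every frame is dominated:

```python
# A sketch of the selection step with a greedy approximation standing in
# for the paper's integer linear program: repeatedly pick the frame that
# covers the most not-yet-dominated frames.
def greedy_dominating_set(adjacency):
    """adjacency: dict mapping each frame id to its set of similar frames."""
    uncovered = set(adjacency)
    keyframes = []
    while uncovered:
        # a frame covers itself plus its neighbors
        best = max(adjacency,
                   key=lambda v: len(({v} | adjacency[v]) & uncovered))
        keyframes.append(best)
        uncovered -= {best} | adjacency[best]
    return keyframes

# Toy graph: edges connect visually similar frames (weights omitted).
graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}, 4: set()}
print(greedy_dominating_set(graph))   # [2, 4]: two keyframes suffice
```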