ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 September 2011, Volume 48 Issue 9
A Flow-Based Differentiated Service Scheme in PMIPv6 Networks
Shen Li, Zhang Hanwen, Xu Zhijun, Zhang Yujun, Li Zhongcheng, and Pan Xianfeng
2011, 48(9):  1571-1579. 
The proxy mobile IPv6 protocol (PMIPv6) is a network-based mobility management protocol proposed by the IETF. It has been adopted by many standards organizations and will be one of the most important components of the next-generation mobile network. PMIPv6 provides mobility management for IPv6 nodes without host involvement, but it only defines how mobility entities manage hosts' intra-domain mobility and lacks QoS control. To solve this problem, we propose a flow-based differentiated service scheme in PMIPv6 networks (FD-PMIPv6). To implement differentiated service for different flows in the PMIPv6 domain, we introduce a logical-tunnel-based flow separation method, as well as a binding method between flows and logical tunnels. With the proposed separation method, multiple tunnels can be established between a pair of mobility management entities, each with a specific service type. With the binding method, mobility management entities can bind flows to particular logical tunnels according to their QoS requirements and network status. Simulation results based on NS2 show that FD-PMIPv6 can provide differentiated services for different flows and thus meets the specific QoS requirements of different flows more efficiently than PMIPv6.
Distributed Computing Model and Supporting Technologies for the Dynamic Allocation of Internet Resources
Peng Yuxing, Wu Jiqing, and Shen Rui
2011, 48(9):  1580-1588. 
Internet computing, such as peer-to-peer, grid, and cloud computing, is becoming increasingly popular. Resource allocation is a key problem for all kinds of Internet applications. However, the dynamics of Internet resources make resource allocation one of the most challenging issues on the Internet. Aiming at this issue, this paper presents a computational model of dynamic Internet resource allocation and some related distributed algorithms. Firstly, the dynamics of Internet resources are analyzed, and the result shows which characteristics remain invariant during the resource allocation process. Secondly, based on these invariant characteristics, an organization model of distributed resources, a computational model of allocating resources, and the APIs for using Internet resources are presented. Thirdly, some system-level distributed resource allocation algorithms, such as publishing resources and requesting resources, are presented to support the models. In addition, the definition of a good serving peer and the good-serving-peer selection algorithm are given. Finally, based on the models, two Internet applications are tested. The experimental results show that the models and the algorithms are effective and that the good-serving-peer selection algorithm can drastically decrease the ratio of rejected requests.
A Large-scale Device Collaboration Mechanism
Rong Xiaohui, Chen Feng, Deng Pan, and Ma Shilong
2011, 48(9):  1589-1596. 
In fields such as the Internet of Things, area management, and emergency rescue, the demand for large-scale device collaboration is growing. Aiming at the large scale and strict timing constraints of large-scale device collaboration systems, a two-level task model for large-scale device collaboration is presented in this paper. The task model is made up of collaboration tasks and device tasks, called collaboration subtasks. Based on this task model, a large-scale device collaboration mechanism described with the Pi-calculus is given, which consists of a task-level collaboration mechanism and a subtask-level collaboration mechanism. In the task-level mechanism, given the exclusivity of device resources, resource reservation is adopted to avoid device access conflicts among collaboration tasks. The subtask-level mechanism ensures the timing constraints among collaboration tasks and collaboration subtasks. It includes three parts: a collaboration mechanism between task and subtask, a collaboration mechanism among subtasks, and a time-based collaboration mechanism. The mechanism's accuracy is then proven theoretically. Finally, a large-scale device collaboration prototype system is designed and implemented. The results of simulation experiments on the prototype show that the mechanism can satisfy the performance requirements of large scale and strict timing constraints in large-scale device collaboration.
Gamma Distribution of the Internet Traffic Zoomed
Zhang Guangxing, Xie Gaogang, and Zhang Dafang
2011, 48(9):  1597-1607. 
While there exists an extensive body of prior work on Internet traffic characterization, there has been very little attempt to study the impact of aggregation on Internet traffic. Existing work focuses on characterizing Internet traffic at a single granularity. By zooming the granularity of observation from two different points of view, we analyze in depth the properties of several real traces collected from two big sites. These traces can be divided into two groups: one from China and the other from Japan. Using non-parametric tests, we find that the Gamma distribution fits the Internet traffic well when the zoomed granularity is greater than a particular value, regardless of the observation type. Furthermore, we demonstrate that TCP traffic shows the same behavior as the total traffic and also has the Gamma distribution characterization, but none of the typical distributions fits the UDP traffic well at any granularity from any perspective. In particular, our results stay fairly consistent over time and across traces from different sites. Our findings will be helpful to research on traffic modelling and on future Internet protocols and architectures.
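The zoom-and-fit procedure the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the synthetic exponential traffic, the aggregation factor, and the method-of-moments Gamma estimator are all assumptions made here for demonstration.

```python
import random

def aggregate(series, factor):
    """Zoom the observation granularity by summing consecutive bins."""
    return [sum(series[i:i + factor])
            for i in range(0, len(series) - factor + 1, factor)]

def gamma_moments(samples):
    """Method-of-moments estimate of the Gamma shape k and scale theta."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean ** 2 / var, var / mean   # k, theta

random.seed(7)
# synthetic fine-grained traffic: bytes per 10 ms slot (illustrative only)
fine = [random.expovariate(1 / 50) for _ in range(100000)]
coarse = aggregate(fine, 100)            # zoom out to 1 s granularity
k, theta = gamma_moments(coarse)
```

By the central-limit effect, sums of many fine-grained bins become much closer to a Gamma shape than the raw series, which is the intuition behind the paper's observation that the fit appears only above a certain granularity.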
An Energy Efficient Scheduling Mechanism for Real-time Services in 802.16e
Xue Kaiping, Zhu Bin, Hong Peilin, and Lu Hancheng
2011, 48(9):  1608-1615. 
In the 802.16e system, when multiple mobile stations (MSs) with real-time services are in sleep mode, the QoS of data scheduling and the energy efficiency of sleep mode impact each other. Sleep mode aggravates the competition for data transport among different MSs and decreases the system's QoS, while an improper data scheduling policy interrupts an MS's sleep mode and decreases energy efficiency. To address this problem, an energy-efficient scheduling mechanism is proposed. The proposed mechanism first focuses on the impact of sleep mode on data scheduling. In order to smooth the input of the BS scheduler, i.e., the data to be transmitted to the MSs, it tries to disperse the listening windows of different MSs and does its best to balance the system load among OFDM frames. Then it considers the impact of data scheduling on sleep mode. In order to increase energy efficiency, it improves the widely used EDF mechanism to avoid scheduling an MS's data in its sleep window by promoting the packet priority of the sleeping MS. The simulation results show that the proposed mechanism can not only improve energy efficiency but also decrease signaling overhead while guaranteeing the QoS of the network.
Network-Coding Based Multicast Routing in VANET
Luo Juan, Xiao Yi, Lu Zhen, and Li Renfa
2011, 48(9):  1616-1622. 
Multicast can efficiently improve the utilization and scalability of wireless links. Fusing multicast with network coding can achieve the max-flow min-cut value of the network, which increases network throughput. An event-driven multicast routing algorithm using network coding, NCMR, is proposed. This algorithm overcomes VANET's shortcomings of frequently changing topology and short-lived links. The NCMR algorithm is based on local topology information combined with location information provided by in-vehicle GPS. In this algorithm, a node determines the data transmission rate and the local max-flow min-cut value, calculates the minimum field size with maximum distance separable (MDS) codes, and then constructs the linearly independent global coding matrix to guide downstream node encoding. Aside from guaranteeing the decoding success rate at the target nodes, the algorithm also reduces the amount of finite-field information that must be transmitted. In order to guarantee network QoS, the NCMR algorithm forces nodes to switch to opportunistic routing when the network is partitioned. Simulation results show that the NCMR algorithm effectively reduces communication between nodes, avoids communication interference, achieves a higher reception success rate, and balances the network load.
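The core network-coding operation behind schemes like NCMR is random linear coding over a finite field: intermediate nodes forward random linear combinations of source packets, and a receiver decodes once it collects enough linearly independent combinations. The sketch below uses a small prime field GF(257) purely for illustration; the paper's MDS-based field-size selection and VANET-specific logic are not reproduced.

```python
import random

P = 257  # illustrative prime field; real deployments often use GF(2^8)

def encode(packets, n_coded, rng):
    """Produce n_coded random linear combinations of source packets over GF(P)."""
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randrange(P) for _ in packets]
        payload = [sum(c * pkt[i] for c, pkt in zip(coeffs, packets)) % P
                   for i in range(len(packets[0]))]
        coded.append((coeffs, payload))
    return coded

def decode(coded, n_src):
    """Gaussian elimination over GF(P); returns source packets, or None if singular."""
    rows = [list(c) + list(p) for c, p in coded]
    for col in range(n_src):
        piv = next((r for r in range(col, len(rows)) if rows[r][col]), None)
        if piv is None:
            return None
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = pow(rows[col][col], P - 2, P)         # Fermat inverse
        rows[col] = [x * inv % P for x in rows[col]]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [rows[i][n_src:] for i in range(n_src)]

rng = random.Random(1)
src = [[10, 20, 30], [40, 50, 60]]
coded = encode(src, 3, rng)      # one redundant combination for loss tolerance
recovered = decode(coded, 2)
```

Any two independent combinations suffice, which is why coded multicast tolerates the short-lived links of a VANET better than forwarding specific packets.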
VoIP Capacity Analysis and Optimization for IEEE 802.11e Wireless Local Networks Using Node Differentiation
Wu Qilin, Lu Yang, Ge Lunyue, and Tian Yiming
2011, 48(9):  1623-1633. 
Voice over Internet Protocol (VoIP) is becoming a promising application in wireless local area networks (WLANs). However, VoIP capacity is limited by the WLAN environment. To support VoIP service over WLAN, IEEE 802.11e EDCA adjusts three parameters (contention window (CW), arbitration inter-frame space (AIFS), and transmission opportunity (TXOP)) to provide a differentiated service (DS) for application flows. However, it does not consider how to optimize those parameters to differentiate the access point (AP) from the terminal nodes, even though the AP carries the higher traffic load and becomes the bottleneck for VoIP capacity improvement. This paper addresses this issue. A new analytical model is proposed for VoIP capacity over WLAN. This model, which reflects the ON-OFF behavior of voice flows and the delay bounds of voice uplink and downlink flows, can be used to differentiate the AP and terminal nodes via the three parameters. Based on the analytical model, two differentiation schemes, called separate differentiation (SD) and joint differentiation (JD), are analyzed, in which SD uses any one of the parameters and JD uses two of them to provide differentiated service. As a result, with the aid of the analytical model, the optimal values of CW, AIFS, and TXOP for the two schemes can be obtained to improve VoIP capacity for different voice codecs and different voice packetization intervals. Finally, simulation results confirm that the analytical model is accurate and effective for the optimization of VoIP capacity.
A Business Oriented Risk Assessment Model
Li Bin, Xie Feng, and Chen Zhong
2011, 48(9):  1634-1642. 
Traditional information security risk assessment emphasizes the loss of assets but ignores the effect of risk on business. This paper proposes a business-oriented risk assessment model, BoRAM. On the basis of business security requirements, the proposed model introduces three basic security goals (i.e., confidentiality, integrity, and availability) into the risk assessment process, and further measures risk according to its effect on business processes. In the proposed model, an asset not only serves as a basic evaluation element, as in traditional risk assessment models, but also serves as a support for the business. The risk of the asset, the risk of the business process, and the risk of the business are analyzed hierarchically. In order to measure these risks, all the risk elements are generalized and analyzed by attribute-oriented induction (AOI) as well as a clustering algorithm. Furthermore, a Markov model is introduced to describe the transitions between business processes. Finally, the model is tested on a typical Internet-banking business. Theoretical analysis and experimental results show that the proposed model can evaluate business risk instead of traditional asset risk on the basis of the confidentiality, integrity, and availability of the business, which is exactly the goal of the business security requirements.
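One way to read the Markov component of such a model: if a chain describes how the business moves between its processes, the long-run fraction of time spent in each process can weight that process's risk score. The sketch below is only an interpretation under that assumption; the transition matrix, the three-process "Internet-bank" flow, and the per-process risk scores are all hypothetical.

```python
def stationary(trans, iters=1000):
    """Power iteration for the stationary distribution of a Markov chain."""
    n = len(trans)
    dist = [1.0 / n] * n
    for _ in range(iters):
        dist = [sum(dist[i] * trans[i][j] for i in range(n)) for j in range(n)]
    return dist

def business_risk(process_risks, trans):
    """Weight each process risk by how often the business resides in it."""
    w = stationary(trans)
    return sum(wi * ri for wi, ri in zip(w, process_risks))

# hypothetical Internet-bank flow: login -> transfer -> audit
trans = [[0.1, 0.8, 0.1],
         [0.2, 0.2, 0.6],
         [0.5, 0.1, 0.4]]
risks = [0.2, 0.7, 0.4]   # per-process scores from a C/I/A assessment
r = business_risk(risks, trans)
```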
Synchronization in Inter-Packet Delay Based Flow Correlation Techniques
Zhang Lu, Luo Junzhou, Yang Ming, and He Gaofeng
2011, 48(9):  1643-1651. 
As one of the most important network flow characteristics, inter-packet delay (IPD) is used by many flow correlation techniques. Such a technique selects appropriate packet samples in the output flow to calculate IPD-based statistical characteristics, and estimates their similarity to the input flow's characteristics using correlation algorithms. However, perturbations during flow transmission destroy the synchronization among flows and mismatch the correlation start points and IPDs, which significantly decreases the detection rate. This paper summarizes all types of perturbations and introduces a new matching-set based synchronization idea, which assigns several possible mappings to each correlation point. Two synchronization algorithms, based on greedy and progressive methods, are proposed to improve the effectiveness of flow correlation techniques. The experimental results show that the proposal can effectively solve the synchronization problem in the presence of flow perturbation and increase the detection rate of IPD-based flow correlation techniques.
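The matching-set idea can be illustrated with a toy correlator: instead of one fixed packet-to-packet mapping, several candidate alignments are scored and the best is kept, so a single injected or dropped packet does not destroy the match. This is a simplification with illustrative timestamps, not the paper's greedy or progressive algorithm.

```python
def ipds(timestamps):
    """Inter-packet delays of a flow's packet timestamps."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def correlate(in_ts, out_ts, max_shift=2):
    """Try several sample alignments and keep the best IPD correlation."""
    x = ipds(in_ts)
    best = -1.0
    for s in range(max_shift + 1):
        y = ipds(out_ts[s:])
        m = min(len(x), len(y))
        if m > 2:
            best = max(best, pearson(x[:m], y[:m]))
    return best

inp = [0.0, 0.10, 0.35, 0.40, 0.80, 1.00]
# same flow observed downstream: constant delay plus one injected packet
out = [0.50] + [t + 0.5 for t in inp]
score = correlate(inp, out)
```

With a fixed start-point mapping (shift 0) the injected packet shifts every subsequent IPD; allowing a small set of candidate mappings recovers the near-perfect correlation.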
Secure Data Aggregation Algorithm Based on Reputations Set Pair Analysis in Wireless Sensor Networks
Ma Shouming, Wang Ruchuan, and Ye Ning
2011, 48(9):  1652-1658. 
Wireless sensor networks are self-organizing networks consisting of large numbers of low-cost, low-power tiny sensor nodes that communicate with each other to perform sensing, processing, and storage of sensed data cooperatively. In order to effectively reduce the amount of data transmission, and thereby the overall energy consumption under the strict energy limitations of wireless sensor networks, while simultaneously guaranteeing the security of data aggregation, a secure data aggregation algorithm based on set pair analysis of sensor node reputations is presented. Employing a subtractive clustering method based on a density function during the node clustering phase results in faster clustering, more reasonable cluster head distribution, and more suitable cluster sizes. In the data transmission phase, the selection of the next-hop node is modeled as a multiple-attribute decision-making process. The network data stream therefore achieves balance and safety by virtue of a comprehensive evaluation of the cluster head attributes (reputation, energy, etc.) and the selection of the optimal cluster head for relaying the aggregated data. Simulation results show that the proposed algorithm is superior to similar data aggregation algorithms, such as LEACH and BTSR, in aggregation precision, aggregation security, and cluster head energy consumption.
Detection of Code Vulnerabilities via Constraint-Based Analysis and Model Checking
Wang Lei, Chen Gui, and Jin Maozhong
2011, 48(9):  1659-1666. 
Compared with traditional program analysis, model checking offers better precision in vulnerability detection. However, it is hard to apply model checking directly to detect buffer overflows, code injection, and other security vulnerabilities. To address this problem, an approach that combines constraint-based analysis with model checking to detect vulnerabilities automatically is proposed in this paper. First, we trace the information of buffer-related variables in the source code via constraint-based analysis, and instrument the code with the corresponding attribute transfers and buffer constraint assertions before the potentially vulnerable points related to those buffers. The problem of detecting vulnerabilities is thereby converted into the problem of verifying the reachability of these constraint assertions, which is checked with model checking. In addition, we introduce program slicing to reduce the code size and thus the state space of model checking. CodeAuditor is the prototype implementation of our method. With this tool, 18 previously unknown vulnerabilities in six open source applications were discovered, with an observed false positive rate of around 23%. The results of slicing minicom show that slicing improves detection performance.
Improved Coupled Tent Map Lattices Model and Its Characteristics Analysis
Liu Jiandong, Yang Kai, and Yu Youming
2011, 48(9):  1667-1675. 
An improved coupled map lattices (ICML) model consisting of tent maps is presented, motivated by security from the point of view of cryptography. The model inherits the coupled diffusion and parallel iteration mechanisms of coupled map lattices (CML). The ICML system state can reach an ergodic state, and pseudo-random sequences with multiple outputs and uniformly distributed characteristics can be generated quickly through the dual nonlinear operation of the stretch-and-fold of each local lattice's tent map and a modulo addition operation. Simulation and analysis demonstrate that the differential value distribution of the sequences generated by ICML is the same as that of true random sequences, with all elements having equal probability of appearance, and that it is computationally infeasible to extract and reproduce the system architecture and parameter information from the generated sequences. In addition, compared with CML, which is often used in chaos-based cryptography, the ICML model effectively restrains the short-period phenomena that numeric chaotic systems can produce, and it has many desirable properties, such as zero correlation over the whole field, a uniform invariant distribution, and a much larger and more stable maximum Lyapunov exponent. All of these properties suggest that ICML has potential applications in encryption.
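A generic coupled tent map lattice, the structure the ICML model builds on, can be sketched in a few lines. This shows only the classic diffusive coupling with parallel iteration; the paper's modulo-addition improvement and its parameter choices are not reproduced, and the coupling strength here is an arbitrary illustrative value.

```python
def tent(x, mu=1.99999):
    """Tent map: stretch-and-fold of the unit interval."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def cml_step(lattice, eps=0.1):
    """One parallel iteration of a coupled tent map lattice with
    nearest-neighbour diffusive coupling and periodic boundaries."""
    n = len(lattice)
    f = [tent(x) for x in lattice]
    return [((1 - eps) * f[i] + eps / 2 * (f[(i - 1) % n] + f[(i + 1) % n])) % 1.0
            for i in range(n)]

initial = [0.123, 0.456, 0.789, 0.321]
lattice = initial
for _ in range(1000):
    lattice = cml_step(lattice)   # every lattice site updates simultaneously
```

Each site mixes its own chaotic orbit with its neighbours', which is the "coupled diffusion" the abstract refers to; a keyed variant of such iteration is what chaos-based stream generators sample for pseudo-random output.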
A Self-Optimization Mechanism of System Service Performance Based on Autonomic Computing
Zheng Ruijuan, Wu Qingtao, Zhang Mingchuan, Li Guanfeng, Pu Jiexin, and Wang Huiqiang
2011, 48(9):  1676-1684. 
The security of computers and networks is a key subject in the computer field. How to supply service to users autonomously, without degradation, under intrusions or abnormal attacks is the ultimate goal of network security technology. An automated, flexible, fine-grained management method is needed to solve the problem of security decline. Autonomic computing is regarded as a novel way to implement security self-management of computer and network systems, and it has become a frontier research hotspot with an interdisciplinary character in network security. Combined with the martingale difference principle, a self-optimization mechanism based on autonomic computing, SOAC, is proposed. According to prior self-optimizing knowledge and parameter information about the inner environment, SOAC searches for the convergence trend of the self-optimizing function and performs dynamic self-optimization, aiming at minimizing the optimization mode rate and maximizing service performance. After that, the best optimization mode set is updated, and a prediction model is constructed and renewed, which implements static self-optimization and improves the accuracy of self-optimization prediction. The two procedures interact and cooperate with each other, implementing the autonomic improvement of system service performance in a changing inner environment. The simulation results validate the efficiency and superiority of SOAC.
Heterogeneous Distributed Linear Regression Privacy-Preserving Modeling
Fang Weiwei, Ren Jiang, and Xia Hongke
2011, 48(9):  1685-1692. 
Privacy preservation is one of the most important and challenging issues in the data mining field. It helps mining tools mine rules and patterns accurately while preserving the original private information in the database. Statistical regression is a common tool in data mining, but little work has investigated how statistical analysis can be performed when the data set is distributed among a number of data owners. For confidentiality or other proprietary reasons, data owners are reluctant to share data with others, yet they wish to perform statistical analysis cooperatively. We address this important tradeoff between privacy and global statistical analysis. In this paper, the authors propose a homomorphic public key protocol based on ring homomorphism and the discrete logarithm problem, and then construct a privacy-preserving regression model that can obtain accurate statistical results by using the homomorphic property of the protocol. Theoretical analysis and experimental results prove that the protocol and the model are secure and effective.
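The structural idea behind distributed regression, independent of the cryptography, is that least-squares fitting needs only aggregate sufficient statistics, never the raw records. The sketch below shows that structure for simple linear regression over horizontally partitioned data; the paper additionally protects even these aggregates with a homomorphic public-key protocol, which is omitted here, and the two "data owners" and their records are invented for illustration.

```python
def local_stats(xs, ys):
    """Each data owner publishes only aggregate statistics, not raw records."""
    return (len(xs), sum(xs), sum(ys),
            sum(x * y for x, y in zip(xs, ys)),
            sum(x * x for x in xs))

def combine_and_fit(stats_list):
    """Sum the per-owner statistics and solve the normal equations."""
    n, sx, sy, sxy, sxx = (sum(t) for t in zip(*stats_list))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# two hypothetical data owners with horizontally partitioned records
a = ([1, 2, 3], [2.1, 3.9, 6.2])
b = ([4, 5], [8.1, 9.8])
slope, intercept = combine_and_fit([local_stats(*a), local_stats(*b)])
```

Because the statistics are additive, the combined fit is exactly the fit on the pooled data; homomorphic encryption then lets the owners sum them without any party seeing another's contribution.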
MIOS: A Scalable Multi-Instance OS for Large Scale CCNUMA System
Lu Kai, Chi Wanqing, Gao Yinghui, and Feng Hua
2011, 48(9):  1693-1703. 
MIOS is a scalable operating system designed for large-scale CCNUMA systems. It introduces a multi-instance kernel structure: each instance of the OS kernel executes the same code, but runs on a separate node of the CCNUMA machine and manages that node's resources. MIOS provides a single-system-image running environment for all nodes of the CCNUMA system, supporting both process and thread task models. Aiming at the features of CCNUMA systems and the requirements of scientific computing applications, MIOS provides several optimizations, including a weakly shared thread model, cascaded task scheduling, adaptive communication between tasks, and register-based locks. We have implemented MIOS on our Galaxy parallel computer system, a large-scale CCNUMA system with 2048 processors. The evaluations on the Galaxy system, including micro-benchmarks and real parallel applications, show that MIOS provides performance comparable to a conventional OS for MPI applications. For OpenMP applications, MIOS also provides good performance speedup on the large-scale CCNUMA system with 2048 processors. The structure of MIOS can also provide experience for designing operating systems for many-core processors.
A Schema-Based Approach to GML Compression
Wei Qingting, Guan Jihong, and Zhou Shuigeng
2011, 48(9):  1704-1713. 
GML, an XML-based geographic modeling language, has become a de facto encoding standard for geospatial data. GML documents are usually extremely verbose because of highly repetitive structures such as tags and attribute names, which contribute to the self-describing advantage of GML data. Besides, GML documents are rich in data, having many space-consuming textual data items, including attribute values and element contents. Worse still, there is often a great amount of high-precision spatial coordinate data in text format, which occupies more storage space than binary format would. Hence it is very costly to store and transfer GML documents. An effective schema-based approach to GML compression is proposed, which compresses a GML document by first inferring a schema from the document and validating the document against it, then encoding the state transition paths of the tree automaton as bits, compressing the coordinate data via delta encoding, and finally forwarding the inferred schema and all encodings to general-purpose text compressors. Experiments on real GML documents show that the proposed compressor outperforms typical general text compressors (gzip and PPMD), state-of-the-art XML compressors (including XMill, XMLPPM, and XWRT), and the GML compressor GPress in compression ratio.
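The delta-encoding step for coordinates is easy to illustrate: neighbouring coordinates in a geometry differ only slightly, so storing the first value plus successive differences yields small numbers that a back-end text compressor handles far better. A minimal sketch with made-up integer coordinates (e.g., longitudes in 1e-6-degree units):

```python
def delta_encode(coords):
    """Store the first coordinate, then successive differences,
    which are small and compress far better than the raw values."""
    return [coords[0]] + [b - a for a, b in zip(coords, coords[1:])]

def delta_decode(deltas):
    """Invert delta encoding by a running sum."""
    coords = [deltas[0]]
    for d in deltas[1:]:
        coords.append(coords[-1] + d)
    return coords

xs = [121342155, 121342198, 121342240, 121342239]  # hypothetical coordinates
enc = delta_encode(xs)
```

Here the nine-digit values shrink to two-digit deltas, which is exactly the redundancy the scheme hands to gzip or PPMD.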
A Feature Reuse Method at Requirement Level Based on Aspect Encapsulation
Luo Shutong, Pei Zhili, Zhang Changhai, and Jin Ying
2011, 48(9):  1714-1721. 
Identifying reusable software assets is the basis of software reuse practice. A feature model can organize software requirements effectively in a given domain by defining features and their relationships, which provides strong support for domain requirements reuse. Aspect-oriented system design emphasizes reducing the tangling among requirements or code produced during software development and achieving high modularity by encapsulating crosscutting concerns into aspects, which benefits maintenance and reuse. A method for aspect encapsulation of features from a feature model at the requirement level is proposed for the purpose of feature reuse; it can identify modules to be reused from legacy systems in a domain. First, by analyzing the requirements documents of multiple legacy systems, system concerns are elicited and a domain concern hierarchy is established. Next, a set of domain features is identified, aspect encapsulation is performed on similar features, and the feature layer model is set up. Finally, a new system is developed with the assistance and reuse of the feature layer model and the encapsulated aspects. A case study applies our method to design a new Web system from two legacy Web systems. The results indicate that our approach is helpful for reusing multiple legacy systems within a domain.
A Weighted Algorithm of Inductive Transfer Learning Based on Maximum Entropy Model
Mei Canhua, Zhang Yuhong, Hu Xuegang, and Li Peipei
2011, 48(9):  1722-1728. 
Traditional machine learning and data mining algorithms mainly assume that the training and test data are in the same feature space and follow the same distribution. However, in real applications, data distributions change frequently, so these two assumptions are often difficult to satisfy. In such cases, most traditional algorithms are no longer applicable, because they usually require re-collecting and re-labeling large amounts of data, which is very expensive and time-consuming. As a new learning framework, transfer learning can effectively solve this problem by transferring the knowledge learned from one or more source domains to a target domain. This paper focuses on one of the important branches of this field, namely inductive transfer learning, and proposes a weighted algorithm of inductive transfer learning based on the maximum entropy model. It transfers the parameters of the model learned from the source domain to the target domain, and meanwhile adjusts the weights of instances in the target domain to obtain a model with higher accuracy. It can thus speed up the learning process and achieve domain adaptation. The experimental results show the effectiveness of the algorithm.
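The instance-weight adjustment at the heart of such weighted transfer schemes can be sketched with a generic boosting-style update: target-domain instances the current model misclassifies get larger weights in the next training round. This is a common pattern, not necessarily the paper's exact update rule, and the step size `beta` is an illustrative assumption.

```python
import math

def update_weights(weights, errors, beta=0.5):
    """Boosting-style reweighting: target-domain instances the current
    model gets wrong receive larger (normalized) weights next round."""
    new = [w * math.exp(beta * e) for w, e in zip(weights, errors)]
    z = sum(new)                      # renormalize to a distribution
    return [w / z for w in new]

w = [0.25, 0.25, 0.25, 0.25]
errors = [0, 1, 0, 1]                 # 1 = misclassified by the current model
w = update_weights(w, errors)
```

Iterating this update alongside maximum-entropy parameter fitting pulls the model toward the target-domain distribution, which is the "domain adaptation" effect the abstract describes.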
Multi-Objective Evolutionary Algorithm for Principal Curve Model Based on Multifractal
Zhang Dongmei, Gong Xiaosheng, and Dai Guangming
2011, 48(9):  1729-1739. 
Current model-based multi-objective evolutionary algorithms use linear modeling approaches such as PCA and local PCA, whose deficiencies are that the model fitting results are not satisfactory and are sensitive to modeling parameters. In this paper, a multi-objective evolutionary optimization algorithm based on a multifractal principal curve (MFPC-MOEA) is proposed. The algorithm uses a principal curve to build a nonlinear model of the distribution of the solution set and to establish a probability model of the individual distribution of the population, which can generate individuals distributed evenly in the objective space and ensure the diversity of optimization results. The start and stop criteria for modeling are two important aspects of a model-based multi-objective algorithm. In this paper, we analyze the distribution of individuals in the solution space with the multifractal spectrum and design the start criterion of modeling for the multi-objective evolutionary algorithm, which is used as the initial condition of the model. Furthermore, the multifractal approach is used to assess the convergence degree of the algorithm, in order to design a stop criterion for the multi-objective evolutionary optimization algorithm. Moreover, we adopt internationally recognized test functions such as ZDT and DTLZ to conduct comparison experiments with NSGA-II, MOEA/D, PAES, SPEA2, and other classical multi-objective evolutionary optimization algorithms. The simulation results show that the proposed algorithm performs better on the performance indicators HV, SPREAD, IGD, and EPSILON, which indicates that, through the introduction of the multifractal modeling strategy and the principal curve method, the quality of the solutions is improved to a certain extent. A new idea for solving multi-objective optimization problems (MOPs) is thus provided.
A Running State Analysis Model for Humanoid Robot
Wang Xianfeng, Hong Bingrong, Piao Songhao, and Zhong Qiubo
2011, 48(9):  1740-1747. 
In this paper, according to the dynamics of a running humanoid robot, a probabilistic model of running state analysis for humanoid robots is proposed based on feedback from a virtual acceleration sensor. Inertial force affects the running state of a humanoid robot during running, and the acceleration value can express the inertial force. So we can obtain dynamic feedback from the virtual acceleration sensor built into the humanoid robot to characterize its running state, and can analyze this feedback using the wavelet transform and the fast Fourier transform. The probabilistic model of running state analysis is formulated from energy eigenvalues extracted in the frequency domain. Using the Mahalanobis distance as a criterion for stable running, the model can express the humanoid robot's running state quantitatively. Simulation is conducted on a humanoid robot model built with ADAMS, with the virtual acceleration sensor placed at the robot's center of mass. The experimental results show that the model is able to describe the running of the humanoid robot and express its running state during the whole course of running, including the start and stop gaits, and it can help the humanoid robot adjust its gait with changes in the environment to ensure running stability.
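The Mahalanobis-distance criterion can be shown in miniature: given the mean and variance of the frequency-domain energy eigenvalues of a known stable gait, a new observation is scored by its Mahalanobis distance from that reference. The diagonal-covariance simplification and all the numbers below are illustrative assumptions, not values from the paper.

```python
def mahalanobis_diag(x, mean, var):
    """Mahalanobis distance under a diagonal covariance (independent
    energy eigenvalues), enough to threshold 'stable running'."""
    return sum((xi - mi) ** 2 / vi
               for xi, mi, vi in zip(x, mean, var)) ** 0.5

# hypothetical energy eigenvalues of a stable running gait
mean = [4.0, 2.5, 1.0]
var = [0.25, 0.16, 0.04]

stable = mahalanobis_diag([4.1, 2.4, 1.05], mean, var)   # near the reference
stumble = mahalanobis_diag([6.0, 1.0, 2.0], mean, var)   # far from it
```

A threshold on this distance then separates "stable" from "unstable" running states, which is how the quantitative criterion in the abstract would be applied.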
A Reconfigurable System-on-Chip Design Methodology Based on Function-Level Programming Model
Chen Yu, Li Renfa, Zhong Jun, and Liu Tao
2011, 48(9):  1748-1758. 
Reconfigurable system-on-chip (RSoC) is a promising alternative that delivers both flexibility and high computation speed, and a technical solution well suited to the future needs of the embedded applications market. However, its very complex design process impedes wide adoption. Given the inefficiency of the current programming process and resource management, this paper proposes an RSoC design methodology based on a function-level programming model that accounts for the characteristics of the reconfigurable architecture. In this programming model, system designers use a high-level language to complete the functional specification by calling the co-function library. A dynamic hardware/software partitioning algorithm then automatically decides whether an invoked function should run on hardware or in software. According to the partitioning result, the dynamic linker switches a function's execution mode in real time. Together, these components form an automated design flow from specification to system implementation. Experiments and tests have verified the feasibility and efficiency of the design method.
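The function-level dispatch idea can be sketched abstractly. The class, method names and the example "fir" co-function below are all hypothetical illustrations (the paper's actual co-function library and linker are hardware-specific); the sketch only shows how a call site stays unchanged while the execution mode is switched at run time:

```python
# Hypothetical function-level dispatcher: every co-function has a software
# implementation, and the partitioner may map it to the reconfigurable fabric.
class CoFunctionLibrary:
    def __init__(self):
        self.sw_impl = {}        # name -> software implementation
        self.hw_mapped = set()   # names currently mapped to hardware

    def register(self, name, fn):
        self.sw_impl[name] = fn

    def map_to_hardware(self, name):
        # In the paper this decision is made by the dynamic HW/SW partitioner.
        self.hw_mapped.add(name)

    def call(self, name, *args):
        # A real dynamic linker would invoke a fabric stub in "hw" mode;
        # here we only tag the mode so the switch is observable.
        mode = "hw" if name in self.hw_mapped else "sw"
        return (mode, self.sw_impl[name](*args))

lib = CoFunctionLibrary()
lib.register("fir", lambda xs: sum(xs) / len(xs))
print(lib.call("fir", [1, 2, 3]))   # software mode
lib.map_to_hardware("fir")
print(lib.call("fir", [1, 2, 3]))   # hardware mode, same call site
```

The point of the model is exactly this transparency: the application code never changes when the partitioning result does.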
An Optimized Partitioning Algorithm for Complex Network Based on Social Simulations on Cluster Computing Platform
Yao Yiping and Zhang Yingxing
2011, 48(9):  1759-1767. 
Partitioning is regarded as one of the most important issues that seriously influence the performance of network-based social simulation on cluster computing platforms. Partitioning algorithms that compute a k-way partitioning of an undirected graph are an enabling technology for parallel simulation, as they provide an effective decomposition of the computations. Unfortunately, the scale-free network topology common in network-based social simulations imposes new challenges on graph partitioning algorithms. Because the background load of the hosts introduces unbalanced constraints on the vertices, current k-way partitioning algorithms may produce poor-quality solutions while requiring more running time and memory. This paper formalizes the partitioning problem of network-based social simulation and proposes a new partitioning algorithm for power-law graphs. In this algorithm, the hub nodes in the graph are filtered out and assigned to the partitions first; the k-way partitioning problem is then transformed into a minimum-cost assignment problem whose solution yields partitions satisfying the unbalanced constraints. The algorithm finds a solution whose value is at most three times the optimal value and runs in time O((k!+1)·kn). The experiments demonstrate that the algorithm is efficient.
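The two-stage idea of handling hub vertices separately can be illustrated with a toy version. This is not the paper's algorithm: it replaces the minimum-cost assignment stage with a simple greedy balance rule, and the degree threshold and star-shaped test graph are hypothetical:

```python
from collections import defaultdict

def partition_power_law(edges, k, hub_degree):
    """Toy two-stage split for a power-law graph: place high-degree hubs
    first, then assign the remaining vertices to the lightest partition."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    hubs = sorted(v for v in adj if len(adj[v]) >= hub_degree)
    rest = sorted(v for v in adj if len(adj[v]) < hub_degree)
    parts = [set() for _ in range(k)]
    # Stage 1: spread hub vertices round-robin over the k partitions.
    for i, h in enumerate(hubs):
        parts[i % k].add(h)
    # Stage 2: greedily assign each remaining vertex to the smallest partition
    # (a stand-in for the minimum-cost assignment stage in the paper).
    for v in rest:
        min(parts, key=len).add(v)
    return parts

# A small star-like graph: vertex 0 is the hub.
edges = [(0, i) for i in range(1, 6)] + [(1, 2), (3, 4)]
print(partition_power_law(edges, k=2, hub_degree=4))
```

Separating the few high-degree hubs first is what keeps the remaining subproblem close to balanced, which is the intuition behind the approximation guarantee stated above.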
A Lifetime-limited Causal Order Control Method in Asynchronous DVE System
Zhou Hangjun, Zhang Wei, Peng Yuxing, and Li Sikun
2011, 48(9):  1768-1780. 
A distributed virtual environment (DVE) is a computer-generated virtual space that simulates the real world. In a DVE, causal order consistency must be preserved in real time, which means that causal events must be delivered within the lifetime of the result event. However, due to network latency, some causal events may not arrive at the receiving node in time, especially in a large-scale DVE, and then the causality between the arrived causal events and the result event cannot be maintained within its lifetime. In related work, some methods ignore lifetime-limited causality under the presumption that all events arrive in time, while others require an accurately synchronized simulation clock, and their control overhead is closely coupled with the system scale, so causal control efficiency becomes very low in a large-scale DVE. In this paper, we propose a novel lifetime-limited causal order (LCO) control method that can compare the asynchronous times of different nodes, derive the termination condition for the selection of multi-path causal order control information, and dynamically adapt the causal control information to variations in network latency. Thus, even when some causal events cannot arrive in time, the causality among the arrived events can be preserved within the lifetime limitation using the causal control information selected by LCO. The experiment results demonstrate that LCO effectively preserves causal order consistency within the lifetime, and that the overhead of the causal control information is independent of the system scale.
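The lifetime constraint can be made concrete with a toy delivery rule. This sketch is not the LCO method itself (it omits the multi-path control information entirely); the event identifiers, timestamps and the 5.0 deadline are hypothetical, and it only shows the core requirement that a result event is ordered only after the causal events that arrived within its lifetime:

```python
def deliver(result_event, arrived, now):
    """Toy lifetime-limited delivery.

    result_event: (deadline, causes) -- the result event's lifetime expiry
                  and the ids of its causal events.
    arrived:      {event_id: arrival_time} at the receiving node.
    Returns None while the lifetime has not expired; afterwards, returns the
    causal events that made it in time, ordered by arrival.
    """
    deadline, causes = result_event
    if now < deadline:
        return None                              # still within the lifetime
    usable = [c for c in causes
              if arrived.get(c, float("inf")) <= deadline]
    return sorted(usable, key=lambda c: arrived[c])

arrived = {"a": 3.0, "b": 9.0}                   # "b" misses the deadline
print(deliver((5.0, ["a", "b"]), arrived, now=6.0))   # -> ['a']
```

The hard part that LCO addresses, and this sketch does not, is choosing which control information to piggyback so that this decision can be made without a synchronized clock and with overhead independent of the system scale.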
Ribs and Fans of Bézier Curves and Surfaces with Endpoints G1 Continuity
Huang Weixian and Wang Guojin
2011, 48(9):  1781-1787. 
In order to obtain a decomposition and reconstruction of Bézier curves and surfaces that is better suited to Internet transmission, the ribs and fans of Bézier curves and surfaces with G1 continuity at the endpoints are studied in this paper, and the corresponding smooth parts and detail parts of the curves and surfaces are derived. Conversely, given the smooth part and detail part of a Bézier curve, a reconstruction algorithm for the curve is presented. In addition, the concepts of ribs and fans are generalized to triangular Bézier surfaces: a degree-n triangular Bézier surface can be decomposed into one rib of degree n-1, one fan of degree n-3 and three Bézier curves (fans) of degree n-4. Numerical examples show that the proposed decomposition into smooth parts and detail parts of curves and surfaces is more effective and more convenient.
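For readers less familiar with Bézier curves, the basic evaluation that underlies any such decomposition is de Casteljau's algorithm, repeated linear interpolation of the control points. This background sketch is not the rib/fan decomposition of the paper; the quadratic control polygon below is a hypothetical example:

```python
def de_casteljau(ctrl, t):
    """Evaluate a 2-D Bézier curve at parameter t by repeated interpolation."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bézier with control points (0,0), (1,2), (2,0):
# its midpoint (t = 0.5) lies at (1, 1).
print(de_casteljau([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], 0.5))
```

Rib/fan decompositions operate on exactly these control points, splitting them into a lower-degree smooth part plus detail terms, so that the curve can be transmitted progressively and rebuilt by the reconstruction algorithm.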
Multi-Scale Image Mosaic Using Features from Edge
Cao Shixiang, Jiang Jie, Zhang Guangjun, and Yuan Yan
2011, 48(9):  1788-1793. 
An algorithm that is highly efficient and runs in less time is proposed to extract features from the edges of an image, so that multi-scale image fusion and mosaicking can be carried out. We build an edge-smoothing pyramid and extract stable features for image registration. By reusing the multi-scale representation, the registered images are fused and the cost of mosaicking is reduced. The demonstration results indicate that this algorithm can largely eliminate false feature matches, improve the precision of the transformation between images to the sub-pixel level, and reduce the computational cost of registration and the subsequent mosaicking. Finally, an experimental analysis of the high precision is presented.
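The multi-scale pyramid that the registration and fusion stages share can be sketched in its simplest form. This is a generic smoothing pyramid, not the paper's edge-smoothing construction: the smoothing kernel here is a plain 2x2 block average, chosen only to keep the example short, and it assumes even image dimensions:

```python
import numpy as np

def smoothing_pyramid(img, levels):
    """Build a fine-to-coarse pyramid: smooth by 2x2 block averaging, then
    downsample by a factor of 2 at each level (assumes even sizes)."""
    pyr = [img]
    for _ in range(levels - 1):
        a = pyr[-1]
        # Reshape into 2x2 blocks and average them; this both smooths and
        # halves each dimension in one step.
        a = a.reshape(a.shape[0] // 2, 2, a.shape[1] // 2, 2).mean(axis=(1, 3))
        pyr.append(a)
    return pyr

img = np.arange(16, dtype=float).reshape(4, 4)
pyr = smoothing_pyramid(img, levels=3)
print([p.shape for p in pyr])   # -> [(4, 4), (2, 2), (1, 1)]
```

Building such a representation once and reusing it for feature extraction, registration and fusion is what eliminates the separate mosaicking cost described above.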