ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 May 2013, Volume 50 Issue 5
Paper
A Mobile Strategy for Mobile Full-Coverage Issue
Zhang Le, Li Dong, and Cui Li
2013, 50(5):  901-911. 
With the development of WSNs and CPNs, coverage remains a core issue in practice. Due to environmental constraints in the monitored region and the inherent limitations of sensors, the monitored region often cannot be completely covered by the deployed sensors in practical applications. To improve the coverage performance and achieve full coverage of the monitored region, diverse mobile devices can assist the sensors' coverage through their movement. Accordingly, a new coverage issue, named mobile full-coverage, is proposed in this paper; it concerns how to control the mobile sensors' movement so as to cover the whole region under sparse-coverage conditions. An effective mobile strategy for mobile full-coverage is presented. First, the monitored region is divided into several sub-regions on the basis of the mobile devices' communication range. Second, the sub-regions are represented by a quad-tree, and the mobile nodes visit all sub-regions according to a sequential traversal of the quad-tree. Finally, a mobile strategy is designed within each sub-region according to which parts of it are already covered. The experimental results show that the proposed mobile strategy achieves full coverage of the monitored region with the minimum moved distance, and is therefore an effective mobile scheme for this issue.
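A minimal sketch of the quad-tree subdivision and sequential traversal idea described in this abstract (not the authors' code): the region is split until sub-regions fit an assumed communication range, and the mobile node visits leaf sub-region centres in pre-order. The cell-size rule and waypoint policy are illustrative assumptions.

```python
# Illustrative sketch only: quad-tree subdivision of a square monitored region
# and a sequential (pre-order) traversal of its leaf sub-regions.

def build_quadtree(x, y, size, comm_range):
    """Recursively split a square (x, y, size) until it fits the communication range."""
    if size <= comm_range:
        return {"x": x, "y": y, "size": size, "children": []}
    half = size / 2.0
    children = [build_quadtree(x + dx, y + dy, half, comm_range)
                for dx in (0, half) for dy in (0, half)]
    return {"x": x, "y": y, "size": size, "children": children}

def traversal_waypoints(node):
    """Pre-order traversal: yield the centre of every leaf sub-region to visit."""
    if not node["children"]:
        yield (node["x"] + node["size"] / 2.0, node["y"] + node["size"] / 2.0)
    for child in node["children"]:
        yield from traversal_waypoints(child)

if __name__ == "__main__":
    tree = build_quadtree(0.0, 0.0, 100.0, comm_range=25.0)
    print(list(traversal_waypoints(tree)))  # 16 sub-region centres for a 100x100 region
```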
An Adaptive Soft Frequency Reuse Scheme for LTE Systems
Qian Manli, Li Yonghui, Huang Yi, Zhou Yiqing, Shi Jinglin, and Yang Xuezhi
2013, 50(5):  912-920. 
Enhancing cell edge user performance is an important and challenging issue for cellular mobile communication systems. The next generation cellular mobile system, 3GPP Long Term Evolution (LTE), aims at improving cell edge user performance through effective inter-cell interference coordination (ICIC) and spectrum resource allocation schemes. An adaptive soft frequency reuse (ASFR) algorithm is proposed, which optimizes the system throughput while guaranteeing the minimum data rate requirements of both the cell center and cell edge regions. The ASFR algorithm adaptively adjusts the major and minor subcarriers, and their respective transmit powers in each cell, according to the system traffic load. In ASFR, the optimal resource allocation within each single cell is first determined by exhaustive search and greedy descending methods. Then this single-cell resource allocation is repeated iteratively among all cells until the system throughput no longer changes. Simulation results show that the ASFR algorithm converges to an SFR resource allocation after a number of iterations. It is also shown that ASFR improves the cell edge user throughput and achieves higher system throughput compared with several traditional frequency allocation schemes. These advantages indicate that the proposed algorithm is better suited to the LTE system.
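A minimal sketch of the iteration structure described in this abstract (not the authors' implementation): each cell re-optimizes its own allocation, and the loop repeats until system throughput stops changing. The `optimize_single_cell` and `system_throughput` callables are hypothetical placeholders.

```python
# Illustrative sketch only: iterate single-cell resource allocation over all
# cells until the system throughput converges, per the abstract's description.

def adaptive_sfr(cells, optimize_single_cell, system_throughput, tol=1e-3, max_iter=100):
    """Repeat single-cell resource allocation over all cells until throughput converges."""
    prev = system_throughput(cells)
    for _ in range(max_iter):
        for cell in cells:
            # e.g. exhaustive search over major/minor subcarrier splits,
            # then greedy descending power adjustment (per the abstract).
            optimize_single_cell(cell, cells)
        cur = system_throughput(cells)
        if abs(cur - prev) < tol:   # throughput "no longer changes"
            break
        prev = cur
    return cells
```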
A New Evidential Trust Model Based on Graph Theory for Open Computing Systems
Jiang Liming, Zhang Kun, Xu Jian, and Zhang Hong
2013, 50(5):  921-931. 
An important challenge regarding entity trust evaluation in open computing systems, such as P2P systems, grids, the semantic Web, and ad hoc networks, is how to cope efficiently with trust transitivity and trust aggregation when a large number of malicious entities exist in the system. However, the trust models employed by existing systems have certain disadvantages in the accuracy and reliability of the trust metric. First, they ignore the feedback trust measurement on the direct recommender in the process of trust transitivity. Second, they suffer from information loss or repetitive calculation owing to their limited analysis and handling of the dependent relationships among referral chains. These problems also increase the deviation between the measured trust and the actual trust degree. A novel evidential trust model, GTETM (graph theory based evidential trust model), is presented in this paper; it addresses the problems in current methods for transferring and aggregating trust relationships by combining D-S evidence theory with graph theory. The simulation results show that, compared with existing trust metrics, GTETM is more robust against malicious attacks involving various strategic cheating and collusion behaviors, and achieves notable improvements in the accuracy of the trust metric.
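For readers unfamiliar with the evidential machinery the abstract refers to, below is a textbook sketch of Dempster's rule of combination over a simple trust/distrust frame; it is the generic D-S fusion step that evidential trust models build on, not the GTETM algorithm itself.

```python
# Illustrative sketch only: Dempster's rule over the frame {T (trust), D (distrust)},
# with 'TD' denoting the uncertain mass assigned to the whole frame.

def combine(m1, m2):
    """Combine two basic probability assignments over {'T', 'D', 'TD'}."""
    focal = {"T": [("T", "T"), ("T", "TD"), ("TD", "T")],
             "D": [("D", "D"), ("D", "TD"), ("TD", "D")],
             "TD": [("TD", "TD")]}
    conflict = m1["T"] * m2["D"] + m1["D"] * m2["T"]
    k = 1.0 - conflict                      # normalization factor
    return {h: sum(m1[a] * m2[b] for a, b in pairs) / k for h, pairs in focal.items()}

# Example: fusing two recommenders' evidence about the same target entity.
print(combine({"T": 0.7, "D": 0.1, "TD": 0.2}, {"T": 0.6, "D": 0.2, "TD": 0.2}))
```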
Logic-Based Dynamical Security Policy Language and Verification
Bao Yibao, Yin Lihua, Fang Binxing, and Guo Li
2013, 50(5):  932-941. 
Expression, query evaluation and verification are of great significance to dynamic security policy management, but they are very difficult to solve at present. This paper proposes a logic system, called SSML (simple state modifying logic), which can be used to declare, evaluate and verify dynamic security policies. First, we present the syntax and semantics of SSML based on logic programming theory; an SSML security policy can dynamically insert facts into itself or delete facts from itself. Then we prove that the query evaluation problem of logic systems based on SSML is undecidable in general, which shows that no general algorithm can evaluate all SSML security policies. We therefore turn to its sub-languages and identify two types of SSML security policies with restricted syntax, NDel security policies and TDel security policies, whose evaluation problems are decidable. We give their query evaluation algorithms, OLDTE and OLDTT, which are sound and complete and have polynomial computational complexity. Finally, we show how to use SSML to verify a security policy through a concrete example. We claim that OLDTE and OLDTT can be widely used in security policy management and verification.
Steganalysis of LSB Matching Based on the Measurement of the Region Randomness
Xiong Gang, Ping Xijian, Zhang Tao, and Li Kan
2013, 50(5):  942-950. 
Currently, image steganalysis has evolved into a significant subject of interest in the field of information security. In accordance with the regional stationarity of the image source, a new steganalytic method exploiting image region randomness features is proposed, based on analyzing the changes in regional randomness caused by least significant bit (LSB) matching steganography. First, a given image is divided into a large number of block regions, and a regional LSB sequence is extracted from the pixel sequence of each block region obtained by Hilbert scanning. Then the parameter derived from an LSB sequence by the bit XOR operation is defined as the region randomness measurement index. Finally, the histogram of this index is calculated and analyzed; three sorts of histogram features, including the information entropy, specific histogram values, and moments, are extracted, and the Fisher linear discriminant is applied to classify cover images and stego images. Extensive experimental results on three grayscale image databases with different embedding rates show that the proposed approach performs well in detecting the LSB matching steganographic scheme, and it clearly outperforms previous typical steganalytic algorithms.
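A minimal sketch of the per-block randomness index described in this abstract (not the paper's code). For simplicity the block pixels are scanned row by row rather than along a Hilbert curve, and the index is taken as the fraction of adjacent LSB pairs that differ (bit XOR), so a more random LSB sequence scores closer to 0.5.

```python
# Illustrative sketch only: block-wise LSB randomness measurement for steganalysis.
import numpy as np

def region_randomness(image, block=8):
    """Return the randomness index of every block x block region of a grayscale image."""
    h, w = image.shape
    scores = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            lsb = (image[r:r + block, c:c + block] & 1).ravel()  # regional LSB sequence
            xors = lsb[:-1] ^ lsb[1:]                            # bit XOR of neighbours
            scores.append(xors.mean())
    return np.array(scores)   # histogram features (entropy, moments) would be taken from this

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    print(np.histogram(region_randomness(img), bins=10, range=(0, 1))[0])
```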
Hybrid Role Mining Methods with Minimal Perturbation
Zhai Zhigang, Wang Jiandong, Cao Zining, and Mao Yuguang
2013, 50(5):  951-960. 
A basic problem in the design of a role-based access control (RBAC) system is to automatically discover roles and configure the user-role and permission-role assignments. To achieve these objectives, researchers have proposed discovering roles from the existing user-permission assignments using data mining techniques, which is called role mining. However, most existing role mining techniques do not consider the existing RBAC configuration and try to define everything from scratch. Moreover, the definitions of similarity in the literature do not satisfy the commutative law. In this paper, we formally present a hybrid role mining method, which provides the deployed role set using a top-down approach and mines the candidate role set using a bottom-up approach. We propose a weighted structural complexity measure for the optimality of the RBAC state. We also present definitions of role-set similarity for minimal perturbation that satisfy the commutative law, together with the similarity computation algorithm. Finally, the hybrid role mining algorithm with minimal perturbation is discussed; its computational complexity is analyzed and its effectiveness is evaluated. The evaluation results demonstrate the correctness and effectiveness of our approach.
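As an illustration of the commutativity requirement the abstract raises, below is one generic way to build a role-set similarity that is symmetric: a best-match Jaccard average taken in both directions. This is an assumed construction for illustration, not the paper's exact definition.

```python
# Illustrative sketch only: a commutative (symmetric) similarity between role sets.

def role_similarity(r1, r2):
    """Jaccard similarity between two roles given as permission sets."""
    return len(r1 & r2) / len(r1 | r2) if r1 | r2 else 1.0

def role_set_similarity(rs1, rs2):
    """Symmetric similarity of two role sets: average best-match in both directions."""
    def directed(a, b):
        return sum(max(role_similarity(r, s) for s in b) for r in a) / len(a)
    return 0.5 * (directed(rs1, rs2) + directed(rs2, rs1))

deployed = [{"read", "write"}, {"read", "audit"}]
mined = [{"read", "write", "exec"}, {"audit"}]
assert role_set_similarity(deployed, mined) == role_set_similarity(mined, deployed)
```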
An Adaptive Shared Filter Ordering Algorithm for Data Stream Systems
Li Jun, Zhang Peng, Guo Li, and Zhou Xiaofei
2013, 50(5):  961-968. 
In a multimedia stream filtering scenario, there usually exist many filtering queries that specify the filtering objectives and many filters that evaluate these queries. A filtering query may connect to several different filters, and a filter may connect to several different filtering queries. An open problem in such a scenario is how to order the filters in an optimal sequence so as to decrease the filtering cost. Existing methods are based on a greedy strategy that orders the filters according to three factors: selectivity, popularity, and cost. Although these methods report good results, some problems have not been addressed yet. First, the selectivity factor is set empirically and cannot adapt as the stream passes by. Second, the relationships among the three factors have not been sufficiently explored. Based on these observations, we propose an adaptive hierarchical ordering (AHO) algorithm that executes a two-stage ordering strategy. In the first stage, AHO clusters all the filters according to their popularity and cost factors. In the second stage, the filters within each cluster are ordered according to their selectivity. Experiments on both synthetic and real-life multimedia streams demonstrate that our AHO method outperforms other simple filtering methods.
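A minimal sketch of the two-stage ordering idea described in this abstract (not the AHO implementation): filters are first grouped by a coarse popularity/cost score, then each group is ordered by selectivity. The grouping rule (rounded cost-per-popularity) is an assumption made for illustration.

```python
# Illustrative sketch only: two-stage filter ordering (cluster, then sort by selectivity).
from collections import defaultdict

def two_stage_order(filters):
    """filters: list of dicts with 'selectivity', 'popularity', 'cost' (popularity > 0)."""
    clusters = defaultdict(list)
    for f in filters:
        clusters[round(f["cost"] / f["popularity"], 1)].append(f)        # stage 1: cluster
    ordered = []
    for key in sorted(clusters):                                         # cheaper clusters first
        ordered += sorted(clusters[key], key=lambda f: f["selectivity"]) # stage 2: selectivity
    return ordered
```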
Storage Model and Implementation of the Dynamic Ordered Tree
Te Rigen, Li Wei, and Li Xiongfei
2013, 50(5):  969-985. 
XML, as the representative semi-structured data model, exists in a very large number of documents, and the storage of its ordered trees consumes considerable space, which is one of its obvious shortcomings. One method of reducing the storage space is to compress XML documents with binary encoding. A packet structure is proposed that can not only store the ordered tree space-efficiently but also support its dynamic operations. The structure reduces the amount of modification by separating the binary-coded segments of the ordered tree, and it can locate the packet to be modified rapidly by triple positioning. To handle the loss of node significance after dynamic operations on the ordered tree, an efficient node number table is presented, which describes the tree indirectly and records the content and significance of each node; it also makes up for the shortcoming that the binary encoding can only express the tree structure. By establishing an effective ordinal modification table, the table is updated efficiently. By designing a variety of common operations on the ordered tree and analyzing their space and time complexity, it is shown that this structure realizes space-efficient storage of the ordered tree.
Efficient Top-k Query Processing on Mutual Skyline
Jiang Tao, Zhang Bin, Gao Yunjun, and Yue Guangxue
2013, 50(5):  986-997. 
The top-k mutual skyline query returns k data objects from the mutual skyline. This query is an important tool for decision support since it gives data analysts an intuitive way of finding significant objects. However, it has not received adequate attention from the research community. In this paper, several new algorithms are introduced, including Topk-TBBS (top-k two-step branch and bound skyline), Topk-dMBBS (top-k mutual branch and bound skyline by dynamic skyline), and Topk-wMBBS (top-k mutual branch and bound skyline by window query). The main ideas are information reuse and several efficient pruning policies. In particular, Topk-wMBBS requires the smallest number of node accesses because it fully reuses information during the search and takes advantage of the best-first (BF) search policy; it therefore achieves the best performance among all the algorithms. We also show that Topk-wMBBS is optimal in terms of I/O cost. Finally, we carry out extensive experiments using two real datasets and four synthetic datasets following different distributions. The results show that the proposed algorithms are effective and efficient, and that Topk-wMBBS in particular incurs the fewest I/O accesses when the parameter k, the cardinality of the datasets, and the cache size are varied.
Fuzzy Associative Memories Based on Triangular Norms
Zeng Shuiling, Xu Weihong, and Yang Jingyu
2013, 50(5):  998-1004. 
Some demerits of the fuzzy associative memory (FAM) based on Max-T composition are shown when T is an arbitrary t-norm, so this type of FAM is extended into a new form: the class of Max-T FAMs where T is a triangular norm, a generalization of the t-norm. Since a Max-T FAM is in fact a mapping from one vector space to another, the storage ability of the Max-T FAM with a triangular norm T is partly analyzed with the aid of an analysis of its value domain. Furthermore, a new concept, the concomitant implication operator of a triangular norm T, is presented. With this concomitant implication operator, a simple general off-line learning algorithm and a general on-line learning algorithm are proposed for the class of Max-T FAMs based on an arbitrary triangular norm T; in this case, T requires no restrictions such as continuity, strict monotonicity, or the Archimedean property. For any triangular norm T, it is proved that if a given set of pattern pairs can be reliably and perfectly stored in a Max-T FAM, then the two proposed learning algorithms yield the maximum of all weight matrices that reliably and perfectly store those pairs. Finally, several experiments are given to verify the effectiveness of the class of Max-T FAMs and its learning algorithms.
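For orientation, below is a minimal sketch of recall in a Max-T FAM, i.e. y_i = max_j T(w_ij, x_j), here instantiated with T = min as one particular choice of norm; the paper's learning algorithms for general triangular norms are not reproduced.

```python
# Illustrative sketch only: Max-T composition of a weight matrix with a fuzzy input.

def max_t_recall(W, x, t_norm=min):
    """Max-T recall: each output is max over j of T(w_ij, x_j); rows of W are outputs."""
    return [max(t_norm(w_ij, x_j) for w_ij, x_j in zip(row, x)) for row in W]

W = [[0.9, 0.2, 0.5],
     [0.3, 0.8, 0.1]]
print(max_t_recall(W, [0.6, 0.7, 1.0]))   # -> [0.6, 0.7]
```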
On Cellular Homology Boundary Learning Algorithms
Yang Jiwen, Li Zheshuo, He Shuping, and Xian Min
2013, 50(5):  1005-1011. 
The boundary partitioning problem is one of the core issues of data analysis. In this paper, homology theory is used to solve boundary classification problems for crystal data. Based on homology theory, a cellular homology boundary algorithm, a regular cellular homology boundary algorithm, and cohomology boundary learning algorithms are presented and applied to crystal structure prediction and classification. Because crystallographic data satisfy the basic properties of the symmetry group, the paper draws on homological-algebra methods from a machine learning point of view to study the boundary classification problem on such data. To construct classification models from different angles, the paper first starts from relative homology boundaries, extended to local homology and oriented homology, and then proceeds to the cohomology boundary algorithm and the cellular cohomology boundary algorithm. Finally, the paper extends the focus factor theorems to regular cellular cohomology, and then to the relative manifold. Experimental results show the efficiency of the cohomology boundary learning algorithms.
Approximate Degree Reduction Method by Blending of Multi-Triangular Bézier Surfaces with GC¹ Constraint
Wang Xianghai, Huang Junying, and Li Ming
2013, 50(5):  1012-1020. 
Recently, the problem of approximate degree reduction for triangular surfaces has attracted much attention and remains a hot topic in the field of CAGD. This paper investigates the approximate multi-degree reduction of triangular Bézier surfaces by minimizing a defined distance function under a GC¹ constraint on the boundary, which includes the following: 1) an algorithm for the degree reduction of a single triangular Bézier surface is given by minimizing the defined distance function; 2) the approximate degree reduction of two triangular Bézier surfaces with a GC¹ constraint is studied, and an algorithm is proposed that adjusts the second row of control vertices near the boundary; 3) the approximate degree reduction of multiple triangular Bézier surfaces with a GC¹ constraint is studied by adjusting the internal control points. Specifically, after every pair of triangular Bézier surfaces has been degree-reduced under the GC¹ constraint, several groups of internal control points are determined, a blending function is constructed, and a new blending format for the approximate multi-degree-reduced surface is built. Finally, it is proved in theory that the new triangular Bézier surface and its surrounding surfaces still satisfy the GC¹ constraint. The simulation results show that the proposed algorithm is practical and efficient.
Solving the Stereo Matching in Disparity Image Space Using the Hopfield Neural Network
Xu Sheng, Ye Ning, Zhu Fa, Xu Shanshan, and Zhou Liuliu
2013, 50(5):  1021-1029. 
Since the accuracy of boundary regions is the bottleneck of stereo matching, a feature-based matching algorithm is used here to focus on the matching problem at boundaries. First, we propose a new boundary feature extraction algorithm based on the RBF and take the boundary as the region to be matched. After considering the various constraints that the matching must satisfy, we build a new energy function and use the Hopfield neural network (HNN) for the optimization. Because the target region consists only of boundaries, it is unwise to feed too many neurons into the network; to further reduce the complexity of the algorithm, we propose constructing the network model in the disparity image space, which is an innovative approach. Finally, a large number of experiments are conducted to validate the algorithm, covering standard images, noisy images, and real-scene images. Experimental results show that the new algorithm greatly improves the boundary accuracy and thus overcomes the bottleneck, while the accuracy of the whole region is also significantly improved. The algorithm still performs well in complex cases, demonstrating good robustness and performance.
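For context, below is a minimal sketch of the standard discrete Hopfield update that such energy-minimization formulations rely on; the weight matrix and thresholds encoding the stereo matching constraints are hypothetical placeholders, not the paper's actual model.

```python
# Illustrative sketch only: asynchronous discrete Hopfield network minimization.
# W, theta, state are NumPy arrays; state entries are 0/1.
import numpy as np

def hopfield_minimize(W, theta, state, max_sweeps=100):
    """Asynchronously update neuron states until no state changes (a local energy minimum)."""
    n = len(state)
    for _ in range(max_sweeps):
        changed = False
        for i in range(n):
            new = 1 if W[i] @ state - theta[i] >= 0 else 0
            if new != state[i]:
                state[i], changed = new, True
        if not changed:
            break
    return state

def energy(W, theta, state):
    """E = -1/2 s^T W s + theta^T s, the quantity the network descends."""
    return -0.5 * state @ W @ state + theta @ state
```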
A Technique of Multiple Fault Localization Based on Conditioned Execution Slicing Spectrum
Wen Wanzhi, Li Bixin, Sun Xiaobing, and Qi Shanshan
2013, 50(5):  1030-1043. 
Program spectrum-based software fault localization (PS-SFL) has become one of the hottest research directions due to its efficiency in localizing faults. It usually localizes program faults by computing the suspiciousness of program statements according to the test coverage information. However, the effectiveness of this technique decreases as the number of faults in the program increases. One reason is that the suspiciousness of a program statement is computed by counting the numbers of failed and passed tests, yet the failed tests covering the statement may be caused by different faults. This paper proposes a multiple-fault localization technique (CESS-MFL) based on the conditioned execution slicing spectrum to alleviate the above problem and improve the effectiveness of localizing multiple faults. The CESS-MFL technique first constructs a spectrum matrix of the fault-related conditioned execution slices according to the predicates on input variables, then computes the suspiciousness of the elements (statements or blocks of statements) in the fault-related conditioned execution slices and generates a suspiciousness report. The experiments show that CESS-MFL is more effective than the popular program spectrum-based Tarantula technique and the program slicing-based Intersection and Union techniques on multiple-fault programs, and can be implemented with effective space and time complexity.
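For reference, below is the classic Tarantula suspiciousness score that spectrum-based techniques (and the baseline the abstract compares against) build on; CESS-MFL itself computes suspiciousness over conditioned-execution-slice spectra, which is not reproduced here.

```python
# Illustrative sketch: Tarantula suspiciousness from failing/passing coverage counts.

def tarantula(failed_cov, passed_cov, total_failed, total_passed):
    """Suspiciousness of one statement from the tests that cover it."""
    if failed_cov == 0 and passed_cov == 0:
        return 0.0
    fail_ratio = failed_cov / total_failed if total_failed else 0.0
    pass_ratio = passed_cov / total_passed if total_passed else 0.0
    return fail_ratio / (fail_ratio + pass_ratio) if fail_ratio + pass_ratio else 0.0

# Statement covered by 3 of 4 failing tests and 1 of 10 passing tests:
print(tarantula(3, 1, 4, 10))   # ~0.88, i.e. highly suspicious
```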
An Automatic Program Verifier for PointerC: Design and Implementation
Zhang Zhitian, Li Zhaopeng, Chen Yiyun, and Liu Gang
2013, 50(5):  1044-1054. 
Formal verification is a major method to improve the dependability of software, and automatic program verification based on logical inference is one of the hot research areas. So far there is no product that can be used directly in industry, and the root of this problem lies in the slow development and difficulty of automated theorem proving. Our approach is based on the observation that program analysis methods can be used to collect global information to support program verification. We build shape graphs at each program point in the program analysis phase. A method is proposed that uses regular Hoare logic rules to reason about assignments not involving pointers in a C-like programming language; such rules are applied after aliasing is eliminated using the information in the shape graphs. The soundness of the logic system has been proved. Furthermore, an approach is presented to verify data constraints on mutable data structures without using user-defined predicates. A prototype of our program verifier has been implemented for the PointerC programming language. We have used it to verify programs manipulating recursive data structures, such as lists and trees, and programs dealing with one-dimensional arrays.
An Agent-Based Requirements Monitoring Framework for Internetware
Fu Lingxiao, Peng Xin, and Zhao Wenyun
2013, 50(5):  1055-1065. 
Running in a complicated, open and highly dynamic environment, Internetware systems are likely to deviate from their requirements specifications. In recent years, there has been a series of studies on runtime requirements monitoring and self-repairing based on goal-oriented requirements models and goal reasoning. However, a practical implementation framework for requirements monitoring and repairing that supports typical Internetware characteristics such as distribution and sociality is still missing. In this paper, we propose an agent-based requirements monitoring framework for Internetware. The monitoring agents in the framework are able to monitor host systems for internal goal satisfaction and cross-agent goal delegation at runtime, and to perform repairing actions based on customized policies in a non-intrusive manner when requirements deviations are detected. The framework organizes the monitoring agents in a decentralized way and supports cross-system goal delegation, requirements monitoring and self-repairing through inter-agent communication and interaction. To evaluate the effectiveness of our framework, we have conducted a case study with an online product booking system. The results show that the framework can effectively alleviate potential system failures in various self-repairing scenarios.
A Personalized Web Service Quality Prediction Approach Based on Invoked Feature Model
Zhang Li, Zhang Bin, Huang Liping, and Zhu Zhiliang
2013, 50(5):  1066-1075. 
With the number of Web services increasing exponentially, many available Web services have very similar functionality. One approach to identifying the most suitable services for a user is to evaluate the QoS of these services. A personalized QoS prediction method is proposed which exploits the network, the server environment and user input. It analyzes users' previous behaviors on the Web, extracts the service invoked-feature patterns, and then predicts QoS using the information under these invoked-feature patterns together with the user's invoking features. Experimental results show that the proposed method can significantly improve the accuracy of QoS prediction.
A Collaborative Filtering Recommendation Algorithm Based on Double Neighbor Choosing Strategy
Jia Dongyan and Zhang Fuzhi
2013, 50(5):  1076-1084. 
Collaborative filtering is the most successful and widely used recommendation technology in e-commerce recommender systems. It recommends products for users by collecting the preference information of similar users. However, traditional collaborative filtering recommendation algorithms suffer from low recommendation precision and weak resistance to attacks. To solve these problems, a collaborative filtering recommendation algorithm based on a double neighbor choosing strategy is proposed. First, based on the computed user similarities, the preference-similar users of the target user are chosen dynamically. Then a trust computing model is designed to measure the trust relations between users according to the ratings of the similar users, and the trustworthy neighbor set of the target user is selected in accordance with the degree of trust between users. Finally, a novel collaborative filtering recommendation algorithm based on this double neighbor choosing strategy is designed to generate recommendations for the target user. Using the MovieLens and Netflix datasets, the performance of the proposed algorithm is compared with that of other algorithms in terms of both recommendation precision and attack-resistance capability. Experimental results show that, compared with existing algorithms, the proposed algorithm not only improves the recommendation precision but also effectively resists malicious users.
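A minimal sketch of a double neighbor selection (not the paper's exact model): neighbours must pass both a preference-similarity test and a trust test before contributing to the prediction. The trust measure used here, the fraction of co-rated items agreeing within one star, is an assumption made for illustration.

```python
# Illustrative sketch only: user-based CF with similarity and trust thresholds.
import math

def cosine_sim(u, v):
    """Cosine similarity of two users given as dicts item -> rating."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = math.sqrt(sum(u[i] ** 2 for i in common)) * math.sqrt(sum(v[i] ** 2 for i in common))
    return num / den

def trust(u, v):
    """Assumed trust measure: share of co-rated items agreeing within 1 star."""
    common = set(u) & set(v)
    return sum(abs(u[i] - v[i]) <= 1 for i in common) / len(common) if common else 0.0

def predict(target, others, item, sim_th=0.7, trust_th=0.6):
    """Predict target's rating for item using only similar AND trustworthy neighbours."""
    neigh = [(cosine_sim(target, v), v) for v in others
             if item in v and cosine_sim(target, v) >= sim_th and trust(target, v) >= trust_th]
    if not neigh:
        return None
    return sum(s * v[item] for s, v in neigh) / sum(s for s, _ in neigh)
```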
Error Detection by Redundant Transaction in Transactional Memory System
Song Wei and Yang Xuejun
2013, 50(5):  1085-1099. 
This paper addresses the issue of error detection in transactional memory and proposes a new method of error detection based on redundant transactions (EDRT). The method creates a copy of every transaction, executes both the original transactions and the transaction copies on suitable processor cores, and achieves error detection by comparing their execution results. EDRT utilizes the data-versioning mechanism of transactional memory to obtain an approximately minimal comparison data set for error detection, and this acquisition is transparent and online. The paper applies EDRT to log-based transactional memory and proposes solutions to the design problems of fault-tolerant log-based transactional memory. Finally, EDRT is validated with five test programs, including four SPLASH-2 benchmarks. The experimental results show that the average error detection overhead is about 3.68% relative to the whole program, and only about 12.07% relative to the transactional parts of the program. Compared with the dual modular redundancy (DMR) error detection mechanism, the ratio of the average error detection cost of EDRT to that of DMR is only about 0.05%.
An Approach for Monitoring Memory Address Traces with Functional Semantic Information
Chen Licheng, Cui Zehan, Bao Yungang, Chen Mingyu, Shen Linfeng, and Liang Qi
2013, 50(5):  1100-1109. 
Accurately monitoring the memory traces of applications running on real systems is the basis of memory system scheduling and architecture optimization. HMTT is a hybrid hardware/software memory trace tracking system that is able to track full-system memory traces in real time. However, there is a semantic gap between memory traces and high-level application events, such as the synchronization problem between the upper-level functional execution flow and the memory traces. In this paper, we propose a hybrid hardware/software approach for monitoring memory traces with function-level semantic information. It directly modifies the process image in memory with binary instrumentation at the start of a process, inserting an extra tag memory access instruction at the entry and exit of each target function, which is then tracked and identified by HMTT. With binary instrumentation, there is no need for the application's source code and no need to re-compile or re-link the application, and the runtime overhead is quite low. The experimental results show that our hybrid hardware/software approach can effectively track memory traces with functional semantic information. For memory-intensive applications in SPEC CPU2006, the average runtime overhead is 62%, whereas the runtime overhead of a pure software approach with Pin is at least 10.4 times the original runtime.
Parallel Simulation of Many-Core Processor and Many-Core Clusters
Lü Huiwei, Cheng Yuan, Bai Lu, Chen Mingyu, Fan Dongrui, and Sun Ninghui
2013, 50(5):  1110-1117. 
Computer architecture simulators are an important tool for computer architecture researchers. The recent development of parallel architectures brings great challenges to computer simulation. On the target side, as processors move towards multi-core and many-core, the complexity of the target system doubles at the pace of Moore's law as the number of simulated target cores grows; on the host side, the speed of sequential simulation stalls as the speed of a single host processor stalls. For these two reasons, sequential simulation can no longer meet the challenge of new parallel architectures. In this paper, we describe the necessity and feasibility of parallel simulation for parallel computer architectures using two examples: a many-core processor simulator and a many-core cluster simulator. For the many-core processor simulator, we use parallel discrete event simulation (PDES) to speed it up by 10.9 times without loss of accuracy. For the many-core cluster simulation, we simulate a cluster at the 1024-core scale, with MPI/Pthreads runtime support.
TDDS: Task Deployment and Scheduling Based on Virtual Cluster System
Feng Lin, Fu Yong, Chen Kang, and Zheng Weimin
2013, 50(5):  1118-1124. 
The traditional computer cluster is not easy to manage. For example, it is hard to install the systems, there is no secure isolation between different users, and it cannot construct customized environments according to the requirements of applications and users. Virtual machines can be used to tackle all these problems. A system called TDDS (task deployment and dispatch system) based on virtual machines has been designed and implemented. TDDS builds up the running environment according to the requirements of users and applications. In addition, TDDS provides automatic application deployment and scheduling as well as load balancing among the physical servers in the cluster. The main contributions of this paper include: 1) two different virtual machine setup methods are proposed for speeding up application deployment based on customized virtual machines; 2) a task deployment and scheduling mechanism is designed and implemented on top of the virtual machines, so that TDDS can construct the virtual environment for distributed applications and schedule tasks on the virtual platform; 3) several experiments have been conducted to test the effectiveness and efficiency of the computing environments and resources customized for user applications. The experiments indicate that TDDS performs task deployment and scheduling efficiently, and that the load balancing schemes improve the processing capability of the physical cluster, i.e., achieve the goal of improving resource usage efficiency.