ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 July 2013, Volume 50 Issue 7
Paper
A LT-Codes-Based Scheme for Improving Data Persistence in Wireless Sensor Networks
Liang Junbin and Li Taoshen
2013, 50(7):  1349-1361. 
For a wireless sensor network that has no fixed sink and is deployed in a harsh environment, each node should disseminate its data to a subset of nodes in the network for storage. In this way, a mobile sink can collect all data even if some nodes die by accident. A novel LT (Luby transform)-codes-based scheme, named LTSIDP, is proposed, where LT codes are a kind of erasure code. In LTSIDP, the process of data storage is divided into two steps. In the first step, nodes estimate the number of nodes in the network and the number of data packets by receiving packets for some time. After they acquire these numbers, they can compute a parameter of LT codes. In the second step, nodes store the data they receive according to the parameter. After LTSIDP terminates at each round, a mobile sink entering the network at any time and place within a given interval can collect all data even if it visits only a small number of alive nodes. Theoretical analysis and experiments show that LTSIDP not only achieves higher data persistence but is also more energy-efficient than previous schemes.
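The LT encoding step that LTSIDP builds on can be sketched generically: each coded packet is the XOR of d randomly chosen source packets, with d drawn from a degree distribution. A minimal Python illustration follows (the function name and the toy distribution are ours, not the paper's; LTSIDP's contribution is choosing the code parameter in a distributed way, which is not shown here):

```python
import random

def lt_encode(packets, degree_dist, rng):
    """Produce one LT-coded packet: draw a degree d from the degree
    distribution (degree_dist[k-1] is the weight of degree k), pick
    d distinct source packets uniformly at random, and XOR them
    together byte-wise. Returns (sorted chosen indices, payload)."""
    degrees = range(1, len(degree_dist) + 1)
    d = rng.choices(degrees, weights=degree_dist)[0]
    chosen = rng.sample(range(len(packets)), d)
    payload = bytearray(packets[chosen[0]])
    for idx in chosen[1:]:
        for j, byte in enumerate(packets[idx]):
            payload[j] ^= byte
    return sorted(chosen), bytes(payload)
```

A mobile sink would decode by repeatedly "peeling" degree-1 packets and XOR-ing them out of higher-degree ones, which is why a few visited nodes can suffice to recover all data.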
A Chain Routing Algorithm Based on Evidence Theory in Wireless Sensor Networks
Tang Liangrui, Chen Yuanyuan, and Feng Sen
2013, 50(7):  1362-1369. 
In existing chain routing algorithms, only the remaining energy of nodes is considered when electing the leader node. Although EEPB uses both the remaining energy of nodes and the distance between nodes and the base station (BS) to determine which node is qualified to be the leader, simply weighting these two factors does not completely eliminate the inconsistency between the two decisions. Accordingly, a chain routing algorithm based on evidence theory, called CRET, is proposed. In the leader election stage, CRET uses two evaluation indexes, the remaining energy of nodes and the distance between nodes and the BS, to decide which node is chosen as the leader of the chain by Dempster-Shafer (D-S) evidence theory. In the CRET algorithm, two membership functions are established to describe the two evaluation indexes, from which the basic probability assignment functions of the two indexes are acquired. Then the combination rule of D-S evidence theory is used to fuse the two evaluation indexes into a final decision. In the chain construction stage, to avoid generating long chains, nodes that have already joined the chain are taken into account, and each node is connected to its nearest node. Simulation results show that CRET outperforms EEPB in balancing energy consumption among the nodes and extending the network lifetime.
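The fusion step at the core of CRET's leader election is Dempster's rule of combination. A generic Python sketch follows (the example hypotheses and masses are invented for illustration; in the paper the basic probability assignments come from the two membership functions):

```python
def ds_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments, given as dicts mapping frozenset hypotheses to
    masses. Mass on an empty intersection is conflict and is
    normalized away."""
    fused, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to the empty set
    return {h: mass / (1.0 - conflict) for h, mass in fused.items()}
```

For instance, fusing an "energy" assignment that favors node A with a "distance" assignment split between A and B concentrates the combined belief on A.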
Identity-Based Group Proxy Signature Scheme in the Standard Model
Gu Ke, Jia Weijia, Li Chaoliang, and Chen Rongyuan
2013, 50(7):  1370-1386. 
Current group proxy signature schemes have not been proved secure in a complete security model for group proxy signature, and do not involve the concept of identity. In this paper, we present a complete security model for group proxy signature based on Boldyreva's security model, and propose a new identity-based group proxy signature scheme in the standard model based on Paterson's scheme. In our security model, the new scheme is proved to offer existential unforgeability of identity-based group proxy signatures under an adaptive chosen-message attack, with a security reduction to the CDH problem. Compared with other group proxy signature schemes based on public key cryptosystems in the standard model, the new scheme is more secure and incorporates the concept of identity.
Real Value Negative Selection Algorithm with the n-Dimensional Chaotic Map
Zhang Fengbin and Wang Tianbo
2013, 50(7):  1387-1398. 
In the detector generation phase of the traditional binary chaos negative selection algorithm, a chaotic map is used to generate chaotic sequences, which are then discretized to produce candidate detectors. This method suffers from poor analysis of knowledge and data, low detection efficiency, and a low detector generation rate. Therefore, a real-valued chaos negative selection algorithm is proposed. On the one hand, it introduces chaos theory and uses a self-map with good chaotic features to construct an n-dimensional chaotic map in order to generate the centers of candidate detectors. This improves the traditional detector generation mechanism and is better suited to high-dimensional problems. On the other hand, it optimizes the original V-detector algorithm and determines the detection radius by combining directional movement with the calculation of the geometric center. The aims are, to the greatest possible extent, to maximize the radius value, expand the coverage area, and reduce the number of detectors while satisfying the predetermined coverage rate. Experimental results show that the algorithm improves both the speed of detector generation and the detection efficiency of the detector set.
Action-Based Multilevel Access Control for Structured Document
Xiong Jinbo, Yao Zhiqiang, Ma Jianfeng, Li Fenghua, and Li Qi
2013, 50(7):  1399-1408. 
Cloud computing is a promising computing paradigm which has recently drawn extensive attention from both academia and industry, and structured documents play a vital role as information carriers in cloud computing. Secure access to structured documents is therefore a key technology for the quality control of cloud services. To prevent information leakage and unauthorized access to structured documents, a common problem caused by the lack of a multilevel security mechanism in current cloud computing environments, we propose an action-based multilevel access control model (referred to as AMAC) and provide a formal description of its access control policies. In the AMAC model, we employ noninterference theory from information flow to establish an AMAC noninterference model, and prove the security of the multilevel access control policies in AMAC. Comparison and analysis with existing access control models demonstrate that AMAC not only improves the flexibility of access control policies on the basis of roles, contexts, and access actions, but also realizes a multilevel security mechanism in terms of the security levels of the user, the access actions, and the structured document.
Rational Secure Two-Party Computation Protocol
Zhang En and Cai Yongquan
2013, 50(7):  1409-1417. 
In traditional secure two-party computation, one party may tell a wrong output to the other party, or abort the protocol after obtaining the computation output, so complete fairness cannot be achieved. To address this problem, combining game theory with cryptography, a rational secure two-party computation protocol is proposed in this paper. We first assume that any rational party prefers getting the computation output to not getting it and, secondly, prefers that as few of the other parties as possible get it. Then the game strategies, utilities, and motivations for participants to deviate from or abide by the protocol are studied, and a game model for secure two-party computation is constructed. In the protocol, abiding by the protocol is a best strategy for each party: cheating can be detected, and the gain from following the protocol exceeds the gain from deviating, so a rational party has an incentive to abide by the protocol. Finally, every party obtains the computation result. Analysis shows that the protocol is secure and fair.
Improved Multi-Receiver Signcryption Scheme
Li Huixian, Chen Xubao, Ju Longfei, Pang Liaojun, and Wang Yumin
2013, 50(7):  1418-1425. 
Signcryption is a public key cryptographic primitive that combines the functionalities of encryption and digital signature in a single logical step with low computation and communication overhead. Several security problems are found in existing multi-receiver signcryption schemes, namely disclosure of the recipients' privacy, unfair de-signcryption, and lack of public verifiability. To solve these problems, a new identity-based multi-receiver signcryption scheme using a Lagrange interpolating polynomial is presented in this paper. The proposed scheme has three major features: anonymous de-signcryption, which protects the recipients' privacy by aggregating the identity information of all the authorized recipients; fair de-signcryption, which means all authorized recipients receive the same ciphertext; and public verifiability, which ensures that any third party can verify the validity of the sender from the ciphertext alone. Moreover, the signer only needs to compute one bilinear pairing operation and one exponentiation in the implementation of the proposed scheme. Compared with existing signcryption schemes, the proposed scheme is more efficient in computational complexity and ciphertext size. Finally, we prove its semantic security under the hardness of the bilinear Diffie-Hellman (BDH) problem and its unforgeability under the computational Diffie-Hellman (CDH) assumption in the random oracle model.
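The Lagrange interpolation underlying the scheme can be sketched generically over a prime field. The routine below (function name and toy parameters are ours; the paper's actual polynomial encodes the authorized recipients' identity information) evaluates the unique polynomial through a set of points:

```python
def lagrange_eval(points, x, p):
    """Evaluate the Lagrange interpolating polynomial defined by
    `points` (a list of (xi, yi) pairs with distinct xi) at `x`,
    with all arithmetic in the prime field GF(p)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        # pow(den, -1, p) is the modular inverse (Python 3.8+)
        total = (total + yi * num * pow(den, -1, p)) % p
    return total
```

With the three points (1, 2), (2, 5), (3, 10) lying on f(x) = x^2 + 1 over GF(97), evaluation at any x reproduces f(x) mod 97, which is the mechanism that lets every authorized recipient recover the same value.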
A New Local Space Alignment Algorithm
Liu Shenglan, Feng Lin, Jin Bo, and Wu Zhenyu
2013, 50(7):  1426-1434. 
Recently, manifold learning has been widely exploited in pattern recognition and data mining. Local tangent space alignment (LTSA) is a classical nonlinear manifold learning method, which is efficient for nonlinear dimensionality reduction, but it fails on datasets with high local curvature. To address this problem, this paper characterizes the local curvature of a dataset by a given parameter and presents a new algorithm called locally minimal deviation space alignment (LMDSA). Considering the low robustness of local tangent spaces, LMDSA can detect regions of high local curvature while computing locally minimal deviation spaces. The algorithm also reduces the probability of selecting high-curvature local spaces through parameter control and joint neighborhood information, and then applies the space alignment technique to reduce dimensionality. Besides the advantages above, LMDSA can learn sparse datasets. Extensive experiments on both synthetic manifolds and real-world images indicate the efficiency of our algorithm. On synthetic manifolds, LMDSA is compared with LTSA on two datasets with high local curvature and one dataset with a hole; the experimental results show that our algorithm learns the correct manifold structure in low-dimensional space. On sparse real-world datasets, LMDSA outperforms the other algorithms in this paper.
Self-Adaptive Learning Based Ensemble Algorithm for Solving Matrix Eigenvalues
Xue Yu, Zhuang Yi, Meng Xin, and Zhang Youyi
2013, 50(7):  1435-1443. 
To enhance the performance of numerical optimization algorithms, a self-adaptive-learning-based ensemble algorithm (SALBEA) is proposed. In SALBEA, a greedy breeding operator, an X-evolution operator, a population diversity maintaining operator, and an evolution strategy learning operator are designed to enhance the evolutionary structure. Besides, a probability model and a self-adaptive learning mechanism are employed to integrate four effective search strategies. First, to evaluate the performance of SALBEA, 26 state-of-the-art test functions are solved by SALBEA and its competitors, and the experimental results indicate that SALBEA outperforms its competitors in effectiveness and robustness. Then SALBEA is employed to solve matrix eigenvalue problems. Experimental results show that the precision of the solutions is very high and that SALBEA is a promising algorithm for real-world applications.
A Kernel and User-Based Collaborative Filtering Recommendation Algorithm
Wang Peng, Wang Jingjing, and Yu Nenghai
2013, 50(7):  1444-1451. 
With the development of information technology, people can access more and more information nowadays. To help users find the information that meets their needs or interests among large amounts of data, personalized recommendation technology has emerged and flourished. As the most widely used and successful recommendation technique, collaborative filtering has spread widely and attracted many researchers. Traditional collaborative filtering algorithms face data sparseness and cold-start problems: because they consider only the limited rating data, it is difficult to estimate accurate similarities between users, and hence accurate final recommendation results. This paper presents a kernel-density-estimation-based user interest model and, based on this model, proposes a user-based collaborative filtering recommendation algorithm using kernel methods. By mining users' latent interest suggested by the limited ratings, the algorithm can well estimate the distribution of user interest in the item space and provide a better user similarity calculation method. A distance measure based on classification similarity is proposed for the kernel methods, and two kernel functions are investigated to estimate the distribution of user interest. KL divergence is utilized to measure the similarity of users' interest distributions. Experiments show that the algorithm can effectively improve the performance of the recommendation system, especially in the case of sparse data.
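The final similarity step can be sketched as follows: once each user's interest is estimated as a discrete distribution over the item space, KL divergence compares the distributions. This generic Python illustration (function names, the symmetrization, and the 1/(1+d) mapping are our assumptions, not necessarily the paper's exact formulation) turns the divergence into a bounded similarity:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(P||Q) of two discrete
    distributions given as equal-length lists of probabilities;
    eps guards against zero entries."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def interest_similarity(p, q):
    """Map the symmetrized KL divergence into (0, 1]: identical
    interest distributions give 1, diverging ones approach 0."""
    d = 0.5 * (kl_divergence(p, q) + kl_divergence(q, p))
    return 1.0 / (1.0 + d)
```

Symmetrizing matters because KL divergence itself is not symmetric, while user similarity should be.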
A Fast Clustering Algorithm for Information Retrieval
Liu Ming, Liu Bingquan, and Liu Yuanchao
2013, 50(7):  1452-1463. 
Due to the rapid advance of information retrieval techniques, information overload has become a serious problem for Internet users. To spare users the inconvenience of distinguishing useful information from massive amounts of junk, research on improving retrieval systems has gradually heated up. Up to now, many techniques have been proposed for automatically categorizing and organizing Web information for users, and clustering is one of the most extensively employed tools among them. Through clustering retrieval results, Internet users can quickly find where the results of interest are located. Unfortunately, traditional clustering algorithms are either ineffective or inefficient for this task. Therefore, a novel algorithm specially designed for clustering retrieval information is proposed. This algorithm first applies the maximum-minimum principle to extract accumulation points that form the initial clusters. Experimental results show that this initial cluster partitioning approximates the optimal partitioning and needs only a few iterative adjustment steps to converge. After that, the algorithm iteratively adjusts the feature set of each cluster to make the cluster partitioning more and more precise. Simultaneously, it hierarchically splits the clusters that do not meet the convergence condition into sub-clusters, gaining the merit of representing information hierarchically. Experimental results also demonstrate that the time complexity of this algorithm is close to that of recent techniques for clustering retrieval information. Besides, because of the iterative adjustment of feature sets, the clustering results are more precise and reasonable.
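The maximum-minimum seeding idea can be sketched generically: pick initial cluster centers so that each new center is the point farthest (in minimum distance) from all centers chosen so far. A minimal Python illustration (names and the starting-point choice are our assumptions, not the paper's exact procedure):

```python
def maxmin_centers(points, k, dist):
    """Farthest-point (max-min) seeding: start from the first point,
    then repeatedly add the point whose minimum distance to the
    already-chosen centers is largest."""
    centers = [points[0]]
    while len(centers) < k:
        farthest = max(points, key=lambda p: min(dist(p, c) for c in centers))
        centers.append(farthest)
    return centers
```

Because the chosen centers are mutually far apart, the initial partition is already close to the final one, which is why only a few adjustment iterations are needed afterwards.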
Emergency Resource Multi-Objective Optimization Scheduling Model and Multi-Colony Ant Optimization Algorithm
Wen Renqiang, Zhong Shaobo, Yuan Hongyong, and Huang Quanyi
2013, 50(7):  1464-1472. 
After a large-scale natural disaster breaks out, multiple types of emergency resource requirements are put forward by many disaster-stricken areas. A multi-objective optimization scheduling model is proposed, which takes into account multiple demand centers, multiple supply centers, multiple types of resources, and supply centers cooperating with each other in providing resources to demand centers. The reliability of scheduling routes is considered in the model to enhance its practicability. An optimization algorithm based on multiple ant colony systems is designed to solve the model. The elite strategy is introduced into the global pheromone update strategy to guide the exchange and sharing of information among the multiple ant colony systems and to improve the search for global non-inferior solutions. Next, a practical approach is provided to solve the resource location-allocation problem and the scheduling route planning problem as one integrated problem. Finally, a practical example is presented to verify the validity of the model and algorithm, and it is shown that the algorithm handles large complex networks well.
Column-Oriented Join Order Optimization in Column Store Systems
Wang Mei, Lu Xuchen, and Le Jiajin
2013, 50(7):  1473-1483. 
Join is one of the most important operations and can largely affect the efficiency of column-store-based queries. Most work on column stores focuses on improving the storage structure and building physical auxiliary structures, while logical plan optimization, especially early join strategy optimization, has seldom been considered. To address this problem, this paper presents a new join strategy optimization method based on the characteristics of column-oriented storage structures and analytical queries. We adopt an early optimization strategy and propose a "fact table push-down" rule. In particular, a bushy tree structure is considered in the multi-fact-table case to obtain a "best" join path with low time and space complexity. We then provide a cost estimation to verify the correctness of the proposed join strategy optimization method. Finally, experimental results on the large-scale data warehouse benchmark SSB verify the effectiveness of the early optimization strategy and the proposed push-down rule.
Fractal-Searching-Tree-Based Embedded Wavelet Image Coding
Tang Guowei, Wang Shanshe, Zhang Yan, and Zhao Debin
2013, 50(7):  1484-1490. 
Compared with fractal image coding, wavelet-based fractal image coding can cope with block artifacts and reduce match-searching time effectively. However, using fractal coding in the low-frequency sub-band leads to poor reconstructed images, and match-searching time is still the main overhead of fractal image coding. Therefore, a fractal-searching-tree-based embedded wavelet image coding algorithm is proposed. The image is decomposed into multi-level sub-bands by means of the Haar wavelet. For the low-frequency sub-band, DPCM coding is applied directly. For the high-frequency sub-bands, a self-adaptive approach partitions each sub-band into range blocks of different sizes according to the significance of the sub-bands. Then a fractal searching tree is constructed to determine the domain pool, in which match-searching is carried out in a zigzag scanning manner. Finally, arithmetic coding is employed to encode the obtained fractal parameters. Experimental results show that better reconstructed images are obtained compared with other similar algorithms, and the PSNR is remarkably improved at medium and low bit rates. When the bit rate is less than 0.40 bpp, the PSNR improves by about 0.40~2.48 dB on average. Meanwhile, the running time of the algorithm is also reduced.
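The Haar decomposition used as the first step can be sketched in one dimension: each level splits the signal into pairwise averages (low-frequency sub-band) and pairwise half-differences (high-frequency sub-band). A minimal, generic Python illustration (an unnormalized variant; applying it to rows and then columns gives the 2D sub-bands the paper works on):

```python
def haar_1d(signal):
    """One level of the (unnormalized) Haar wavelet transform:
    pairwise averages form the low-frequency sub-band, pairwise
    half-differences the high-frequency sub-band."""
    pairs = list(zip(signal[::2], signal[1::2]))
    low = [(a + b) / 2 for a, b in pairs]
    high = [(a - b) / 2 for a, b in pairs]
    return low, high
```

The transform is exactly invertible (a = low + high, b = low - high), so coding losses come only from the quantization of the sub-band coefficients, not the decomposition itself.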
SOR-Based P/G Solving Algorithm of Linear Parallelism for GPU Computing
Tang Liang, Luo Zuying, Zhao Guoxing, and Yang Xu
2013, 50(7):  1491-1500. 
Recently, some EDA researchers have tried to use the high computing performance of the graphics processing unit (GPU) to speed up IC parameter analysis. To apply GPUs to the efficient analysis of power/ground (P/G) networks, a novel efficient parallel analysis algorithm is proposed based on the traditional successive over-relaxation (SOR) algorithm. According to the accelerating principles of GPU parallel computing, the algorithm makes the following improvements. 1) An odd/even-based red/black strategy classifies all nodes so that all neighbors of a red node are black and all neighbors of a black node are red. Red nodes and black nodes are relaxed alternately for the data consistency of GPU computing. For a P/G network of N nodes, one relaxation pass over the red or black nodes relaxes N/2 nodes, which means that red/black SOR can, in theory, run N/2 threads at a time. 2) The data structure is optimized to achieve coalesced data access. 3) Local shared memory is used to reduce the relaxation ending flags in parallel, and the compacted flags are zero-copied to the main memory of the computer system at high speed, so the CPU can quickly decide whether to launch the next round of relaxation kernels or end the relaxation. A large number of experiments demonstrate that, compared with a single CPU thread, the speedup of our algorithm increases linearly as the number of physical GPU threads increases, reaching 242X at maximum. Thus, to the best of our knowledge, the algorithm provides the best GPU acceleration efficiency among present EDA research.
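The red/black idea in improvement 1) can be illustrated on the familiar 5-point stencil: coloring nodes by the parity of (i + j) makes every update in a half-sweep depend only on opposite-colored neighbors, so all N/2 updates are independent. A sequential Python sketch of one half-sweep follows (a generic Laplace-stencil example, not the paper's P/G network equations; on a GPU each (i, j) update would be one thread):

```python
def redblack_sor_half_sweep(v, omega, color):
    """One half-sweep of red/black SOR for the 5-point Laplace stencil
    on a square grid with fixed boundary values. Only interior nodes
    with (i + j) % 2 == color are updated; every neighbor of such a
    node has the opposite color, so the updates are independent."""
    n = len(v)
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            if (i + j) % 2 == color:
                avg = 0.25 * (v[i-1][j] + v[i+1][j] + v[i][j-1] + v[i][j+1])
                v[i][j] += omega * (avg - v[i][j])
    return v
```

A full iteration is one red half-sweep followed by one black half-sweep; convergence is then checked (in the paper, via the shared-memory flag reduction).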
Description-Logic-Based Feature Modeling and Verification
Shen Guohua, Zhang Wei, Huang Zhiqiu, Zhang Yulong, Jin Lantao, He Wenmin, Jia Zhe, and Zhao Ziyue
2013, 50(7):  1501-1512. 
The feature model has been widely adopted as a domain requirements capturing model by most current domain engineering methods, but these methods lack a semantic description of the feature model and of the relationships between features. This has led to redundancy and confusion in feature model representations across different domain engineering methods. In this paper, a semantic feature model, including feature classes, relationships, and constraints among features, is presented based on description logic. Rules for mutex, requires, and conflict constraints are proposed and used to verify the consistency and completeness of feature model instances. Then, using a real software domain, the modeling of the feature model with description logic and its verification are discussed systematically: 1) the ontology editor Protege is used to depict the feature meta-model and models; 2) rules are described with a rule language; 3) a description logic reasoner and the query language SPARQL are adopted for consistency and completeness verification. The feature meta-model and rules are domain independent, whereas the feature models (i.e., the instances of the meta-model) are domain dependent and require the experience of domain experts. This approach benefits the semantic modeling of domain feature models (i.e., it reduces the inconsistency among different feature modeling methods) and their verification.
An Approach to Analyzing and Resolving Problems in the Problem-Driven Requirements Elicitation
Wang Bo, Zhao Haiyan, Zhang Wei, Jin Zhi, and Mei Hong
2013, 50(7):  1513-1523. 
Software requirements are stakeholders' expectations of the envisioned software. Researchers have proposed a problem-driven approach to help stakeholders identify requirements: stakeholders first identify the problems of the current software, then find solutions for the problems, and then elicit requirements based on the solutions. However, stakeholders usually cannot identify objective and consistent solutions, nor describe the solutions clearly. Our previous work focuses on the whole process of problem-driven, scenario-based requirements elicitation. In this paper, we enrich our previous work by proposing a collaborative problem analysis and resolution approach, with the purpose of helping stakeholders identify objective and consistent solutions. The basic idea is that stakeholders first discuss the understandability, value, and reasons of the problems; then they identify objective solutions by associating the reasons with solutions. To this end, we provide a classification of problems, a meta-model of problems and collaborative elements, and a collaborative problem analysis process. Moreover, our approach provides mechanisms to help stakeholders handle interrelated problems and to assess the degree of collaboration. We have implemented a Web-based tool (i.e., CPART) and used the "University Class System" to conduct case studies, which show that our approach is useful in practice.
Global Optimal Web Service Selection Model for Multiple Service Requests
Kang Guosheng, Liu Jianxun, Tang Mingdong, and Liu Xiaoqing
2013, 50(7):  1524-1533. 
Web service selection based on quality of service (QoS) has been one of the research focuses in the service computing field. Current service selection methods usually focus on a single service request, or on multiple service requests co-selecting a shared service at a time, without considering the competition among multiple independent service requests for the same functional Web services. A global optimal service selection model for multiple service requests, based on the matching degree (between service requests and Web services) and 0-1 integer programming, is proposed to resolve the conflicts among service requests. A universal and feasible optimal service selection algorithm, named global optimal service selection for multiple requests (GOSSMR), is proposed to solve the model. Under the condition of meeting the QoS requirements of the service requests, it avoids too many requests selecting the same Web service at the same time, thereby optimizing service resources, avoiding overload, and improving system performance. The feasibility and effectiveness of the model and algorithm are verified by simulations.
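The shape of such a 0-1 selection problem can be illustrated with a tiny brute-force sketch (our illustration only: GOSSMR itself is the paper's algorithm, and the matching-degree values and capacity constraint here are invented). Each request is assigned to exactly one service, each service has limited capacity, and the total matching degree is maximized:

```python
from itertools import product

def global_optimal_selection(match, capacity):
    """Brute-force sketch of the 0-1 programme: match[i][j] is the
    matching degree of request i to service j, and service j may
    serve at most capacity[j] requests; return the assignment
    maximizing the total matching degree (exponential search,
    for illustration only)."""
    n_req, n_srv = len(match), len(match[0])
    best_score, best_assign = float('-inf'), None
    for assign in product(range(n_srv), repeat=n_req):
        if any(assign.count(j) > capacity[j] for j in range(n_srv)):
            continue  # some service is over capacity
        score = sum(match[i][assign[i]] for i in range(n_req))
        if score > best_score:
            best_score, best_assign = score, assign
    return best_assign, best_score
```

The capacity constraint is what models the competition among independent requests: without it, every request would simply pick its individually best-matching service.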
A Semantics-Preserving Amorphous Procedure Extraction Method for C Clone Code
Bian Yixin, Wang Tiantian, Su Xiaohong, and Ma Peijun
2013, 50(7):  1534-1541. 
Cloned code, also known as duplicated code, is among the bad "code smells". Procedure extraction can be used to remove clones and make a software system more maintainable. When clone code is extracted into procedures, some clone code fragments cannot be extracted directly with previous syntax-preserving procedure extraction algorithms. To solve this problem, this paper proposes a new semantics-preserving amorphous procedure extraction algorithm. The approach analyzes the program's semantic information with the program dependence graph and the abstract syntax tree. Because it preserves semantics rather than syntax, it can deal with contiguous clone code that cannot be extracted directly by the traditional method, reduces the need to promote unmarked statements, and need not handle jumps that the traditional procedure extraction method cannot address. The experimental results show that the method can extract code clones that the traditional method cannot, improving the accuracy and adaptability of procedure extraction. We conclude that the proposed algorithm can be a useful addition to the toolboxes of software managers and developers, helping them improve code structure and reduce software maintenance and project costs.
Software Reliability Prediction Modeling with Relevance Vector Machine
Lou Jungang, Jiang Jianhui, Shen Zhangguo, and Jiang Yunliang
2013, 50(7):  1542-1550. 
The relevance vector machine (RVM) is very efficient at solving regression problems. An RVM-based generic model, adaptive to the characteristics of the given data set, is used for software failure time prediction. The RVM learning scheme is applied to the failure time data, forcing the network to learn and recognize the inherent temporal properties of the software failure sequence in order to capture the most current features hidden inside the software failure behavior. We also compare the prediction accuracy of software reliability prediction models based on RVM, the support vector machine (SVM), and artificial neural networks (ANN). Experimental results on four data sets show that our RVM-based software reliability prediction model achieves higher prediction accuracy than the ANN-based or SVM-based models. It is also shown that recent failure data give a more accurate prediction of near-future failure events. Reasonable values of m and r can be obtained by experiments.
A Redundancy-Multithread-Based Multiple GPU Copies Fault-Tolerance Technique
Jia Jia, Yang Xuejun, and Li Zhiling
2013, 50(7):  1551-1562. 
With the increasing performance of GPGPUs, heterogeneous systems that consist of CPUs and GPUs are becoming an attractive research hotspot in high-performance computing. However, as higher performance is achieved, lower reliability becomes the bottleneck of parallel computing systems scaled up to large size. Since commercial GPGPUs have little fault-tolerance capability, the reliability problem is acute, and practical fault-tolerance solutions for CPU-GPU heterogeneous systems are lacking. To address this problem, this paper proposes a redundancy-multithread-based multiple-GPU-copy fault-tolerance technique: RB-TMR. Oriented to the programming model of heterogeneous systems and the characteristics of applications, the detailed realization and the compiling framework of this fault-tolerance technique are given. In the experiments, 10 cases are used to evaluate the technique's performance, and the results demonstrate that it opens a novel direction for studying fault-tolerance techniques in heterogeneous systems.
Availability-Aware Scheduling Method for Parallel Task in Cloud Environment
Cao Jie, Zeng Guosun, Niu Jun, and Xu Jinchao
2013, 50(7):  1563-1572. 
Cloud computing is a newly emerging computing paradigm that advocates supplying users everything as a service. Cloud computing has recently become more and more popular in large-scale computing and data storage because it enables the sharing of computing resources that are distributed all over the world. Nowadays, cloud computing is a hot research area in IT industry and academia. In particular, the problem of resource availability cannot be ignored in cloud environments. High availability is a key requirement in the design and development of cloud computing systems, where processors operate at different speeds and are not continuously available for computation. This paper analyzes the main factors of the availability requirements of parallel tasks and the factors influencing the availability support of computational resources, based on the graph structure of parallel tasks and a tree-structured cloud platform. We present formulas to quantify availability. Through availability awareness, we realize the matching between availability requirements and availability support, and develop two availability-aware scheduling algorithms, namely Afsa and Agsa. Simulation results show that these algorithms can effectively enhance resource availability and reliability in cloud environments, which is significant for improving the success rate of parallel task scheduling in practice.