ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 May 2016, Volume 53 Issue 5
Unbiased Sampling Technologies on Online Social Network
Wang Dong, Li Zhenyu, Xie Gaogang
2016, 53(5):  949-967.  doi:10.7544/issn1000-1239.2016.20148387
As popular platforms for content sharing and information diffusion, online social networks (OSNs) such as Facebook and Twitter have attracted massive research attention. While using the complete datasets held by OSN companies would produce the best analysis results, it is hard, if not impossible, for researchers to obtain such datasets, as most OSN companies are reluctant to share their data in order to protect users' privacy. Moreover, given the huge amount of data, analyzing a complete dataset may take an unreasonable amount of time. The alternative is to estimate the features of the complete network from representative samples. Therefore, how to obtain unbiased samples, or make unbiased estimations, on OSNs becomes a key premise of OSN research. This paper provides a general survey of unbiased sampling technologies on OSNs. The general necessary and sufficient condition for unbiased sampling of large-scale networks is first studied mathematically, and then the widely used sampling technologies are compared from the perspectives of sampling principle, sampling bias and sampling efficiency. Finally, the development trend of sampling technologies on OSNs is discussed. This survey can provide OSN researchers with a valuable reference for the use and analysis of sampling technologies.
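As a concrete illustration of the kind of technique such surveys cover, the sketch below shows a Metropolis-Hastings random walk (MHRW), one widely used degree-unbiased sampling method; the toy graph and parameters are illustrative assumptions, not taken from the paper.

```python
import random

def mh_random_walk(graph, start, num_samples, seed=0):
    """Metropolis-Hastings random walk: re-weights a simple random walk so that,
    at stationarity, every node is visited with equal probability, removing the
    degree bias of a plain random walk."""
    rng = random.Random(seed)
    samples, v = [], start
    while len(samples) < num_samples:
        w = rng.choice(graph[v])                      # propose a random neighbor
        # accept with probability min(1, deg(v)/deg(w)); otherwise stay at v
        if rng.random() <= len(graph[v]) / len(graph[w]):
            v = w
        samples.append(v)
    return samples

# toy undirected graph stored as adjacency lists (hypothetical data)
g = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
print(mh_random_walk(g, start=0, num_samples=10))
```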
Research and Development of Moving Target Defense Technology
Cai Guilin, Wang Baosheng, Wang Tianzuo, Luo Yuebin, Wang Xiaofeng, Cui Xinwu
2016, 53(5):  968-987.  doi:10.7544/issn1000-1239.2016.20150225
Nowadays, network configurations are typically deterministic, static, and homogeneous. These features make it easier for cyber attackers to scan the network, identify specific targets and gather essential information, which gives attackers asymmetric advantages in building up, launching and spreading attacks. Thus the defenders are always in a passive position, and the existing defense mechanisms and approaches cannot reverse this situation. Moving target defense (MTD) has been proposed as a revolutionary technology to alter this asymmetry between attacks and defenses. It keeps moving the attack surface of the protected target through dynamic shifting, which can be controlled and managed by the administrator. In this way, the attack surface exposed to attackers appears chaotic and changes over time, so the work effort, i.e., the cost and complexity, for attackers to launch a successful attack is greatly increased. As a result, the probability of successful attacks decreases, and the resiliency and security of the protected target are enhanced effectively. In this paper, we first introduce the basic concepts of MTD and classify the related works into categories according to their research fields. Then, under each category, we give a detailed description of the existing work and analyze and summarize it. Finally, we present our understanding of MTD, summarize the current research status, and discuss the development trends in this field.
Information Hiding Algorithm of IP Covert Timing Channels and Its Performance Analysis
Wang Changda, Huang Lei, Liu Zhifeng
2016, 53(5):  988-999.  doi:10.7544/issn1000-1239.2016.20150063
Covert channel analysis is one of the mandatory requirements of high-level trust evaluations. Because IP covert timing channels use “time” as the medium to carry messages, eradicating them on packet-switched networks is almost impossible. Hitherto, the lack of a general mathematical model has made it a tough job to implement anonymous communication or information hiding among packet flows through IP covert timing channels; as a result, most related work in the past few years relied on experiments and observations only. Based on the physical definition of time, IP covert timing channels are categorized into three types according to their different working methods. Furthermore, mathematical models of IP covert timing channels based on fixed-length time slots and on inter-packet delays are built through probability theory, respectively. In addition, the bandwidth function and error rate function of the network parameters for IP covert timing channels are derived. Experimental results confirm the correctness of the mathematical models as well as the theoretical conclusions of the paper. These models form a basis on which research in this area can proceed through formal analysis instead of experimental observation alone.
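For readers unfamiliar with the basic mechanism, the following is a minimal sketch of an inter-packet-delay timing channel: bits are encoded as short or long gaps between packets. The delay values and threshold are arbitrary illustrative assumptions, no network noise is modeled, and the paper's own probabilistic models are more general.

```python
# Hypothetical parameters: a short gap encodes bit 0, a long gap encodes bit 1.
DELAY_0, DELAY_1 = 0.05, 0.15            # seconds between consecutive packets
THRESHOLD = (DELAY_0 + DELAY_1) / 2

def encode(bits):
    """Return the inter-packet delays that would carry the given bit string."""
    return [DELAY_1 if b == "1" else DELAY_0 for b in bits]

def decode(delays):
    """Recover bits from observed inter-packet delays (noise-free channel)."""
    return "".join("1" if d > THRESHOLD else "0" for d in delays)

msg = "1011001"
delays = encode(msg)
assert decode(delays) == msg
print(delays)
```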
An Efficient Privilege Manage Method Based on RBAC
Luo Jun, Zhao Chuanzhi, Wang Fei
2016, 53(5):  1000-1008.  doi:10.7544/issn1000-1239.2016.20148288
To address the exponential complexity and cumbersome management measures of most role-based access control models and algorithms when certain services are requested, an efficient privilege management method is put forward. After the simplified system model (SSM) is set up, this paper proposes the privilege-based access control list (PBACL) and role plus (RP) to manage service privileges effectively and more safely, and then builds a role relationship structure that divides roles into transverse and longitudinal ones so that relationships among roles can be found quickly. For different service requests, the system can manage key privileges and correlated roles using the PBACL; find the most appropriate role set satisfying an external service request with the user-defined RP algorithm, whose complexity is polynomial; effectively resolve the conflicts that arise when a key privilege is assigned to multiple users; and support fast privilege reconstruction in unstable systems. The scheme also adapts to access control across multiple domains: it can effectively avoid the problem of a key privilege being distributed to more than one user in a multi-domain environment, and can quickly detect the security conflicts introduced by roles mapped to other domains, such as cyclic inheritance conflicts and separation-of-duty violations.
Evaluation of Vulnerability Severity Based on Rough Sets and Attributes Reduction
Fu Zhiyao, Gao Ling, Sun Qian, Li Yang, Gao Ni
2016, 53(5):  1009-1017.  doi:10.7544/issn1000-1239.2016.20150065
Computer vulnerabilities are a major hidden danger to network security: attackers exploit system configuration mistakes, system design flaws or software bugs to attack a system. Because many factors can produce a vulnerability, a vulnerability has many associated attributes, and it is difficult to sift out the attributes that are most relevant. It is also hard to calculate attribute weights objectively, without depending on expert experience or prior knowledge. A new vulnerability assessment method named RAR is proposed to select vulnerability attributes and evaluate severity objectively. The attribute reduction for the vulnerability assessment decision table is obtained from the discernibility matrix of rough set theory, and the vulnerability severity is then evaluated based on the attribute comprehensive evaluation system theory. Finally, a two-tuple representing the qualitative evaluation and the quantitative evaluation value of a vulnerability is obtained. The results show that this method avoids subjective choices of vulnerability attributes and dependence on experts' prior knowledge, handles both attribute reduction and attribute weighting, and is accurate and effective for qualitative and quantitative analysis of vulnerabilities.
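As a small illustration of the rough-set machinery the abstract refers to, the sketch below builds a discernibility matrix for a toy decision table; the data and attribute names are made up, and the paper's RAR method adds weighting and evaluation steps on top of the reduction.

```python
from itertools import combinations

def discernibility_matrix(objects, decisions, attrs):
    """For every pair of objects with different decision values, record the set of
    condition attributes on which they differ (the rough-set discernibility matrix).
    Attribute reducts are the minimal attribute sets hitting every non-empty entry."""
    matrix = {}
    for i, j in combinations(range(len(objects)), 2):
        if decisions[i] != decisions[j]:
            matrix[(i, j)] = {a for a in attrs if objects[i][a] != objects[j][a]}
    return matrix

# toy decision table: three condition attributes and a binary decision
objs = [{"a": 1, "b": 0, "c": 1},
        {"a": 1, "b": 1, "c": 0},
        {"a": 0, "b": 0, "c": 1}]
dec = [1, 0, 0]
print(discernibility_matrix(objs, dec, ["a", "b", "c"]))
```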
An Attribute Weighted Clustering Algorithm for Mixed Data Based on Information Entropy
Zhao Xingwang, Liang Jiye
2016, 53(5):  1018-1028.  doi:10.7544/issn1000-1239.2016.20150131
In real applications, mixed data sets containing both numerical and categorical attributes are common, and clustering analysis for mixed data has recently attracted more and more attention. In order to solve the problem of attribute weighting for high-dimensional mixed data, this paper proposes an attribute-weighted clustering algorithm for mixed data based on information entropy. The main work includes: an extended Euclidean distance is defined for mixed data, which can measure the difference between objects and clusters more accurately and objectively; a generalized mechanism is presented to uniformly assess the compactness and separation of clusters based on within-cluster entropy and between-cluster entropy; a measure of the importance of attributes is then given based on this mechanism; and, building on these, an attribute-weighted clustering algorithm for mixed data based on information entropy is developed. The effectiveness of the proposed algorithm is demonstrated in comparison with widely used state-of-the-art clustering algorithms on ten real-life datasets from UCI. Finally, statistical tests are conducted to show the superiority of the results produced by the proposed algorithm.
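The following sketch illustrates one plausible form of a weighted distance for mixed data, combining squared differences on numerical attributes with 0/1 mismatches on categorical ones; the exact extended Euclidean distance and entropy-based weights in the paper may differ, and the values here are purely illustrative.

```python
import math

def mixed_distance(x, y, num_idx, cat_idx, weights):
    """Weighted distance for mixed data: squared difference on numerical attributes,
    0/1 mismatch on categorical ones. In an entropy-weighted scheme the weights
    would come from the within-/between-cluster entropy of each attribute."""
    d = 0.0
    for i in num_idx:
        d += weights[i] * (x[i] - y[i]) ** 2
    for i in cat_idx:
        d += weights[i] * (0.0 if x[i] == y[i] else 1.0)
    return math.sqrt(d)

x = [1.2, 0.5, "red"]
y = [0.8, 0.9, "blue"]
print(mixed_distance(x, y, num_idx=[0, 1], cat_idx=[2], weights=[1.0, 0.5, 2.0]))
```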
A Dynamic Data Stream Clustering Algorithm Based on Probability and Exemplar
Bi Anqi, Dong Aimei, Wang Shitong
2016, 53(5):  1029-1042.  doi:10.7544/issn1000-1239.2016.20148428
We propose an efficient probability drifting dynamic α-expansion clustering algorithm designed for the data stream clustering problem. In this paper, we first develop a unified objective function covering both the affinity propagation (AP) and the enhanced α-expansion move (EEM) clustering algorithms, namely the probability exemplar-based clustering algorithm. A probability drifting dynamic α-expansion (PDDE) clustering algorithm is then proposed within this probability framework. The framework can handle data stream clustering when the current data points are similar to previous data points. During clustering, the proposed algorithm ensures that the clustering result on the current data points is at least as good as that on the previous data points. Moreover, the algorithm can handle two kinds of similarity between current and previous data points, i.e., whether or not the current data points share some points with the previous ones. Experiments on both synthetic (D31, Birch 3) and real-world datasets (Forest Covertype, KDD CUP99) indicate the capability of PDDE in clustering data streams, and show its advantage over both the AP and EEM algorithms.
Object Retrieval Based on Enhanced Dictionary and Spatially-Constrained Similarity Measurement
Zhao Yongwei, Zhou Yuan, Li Bicheng
2016, 53(5):  1043-1052.  doi:10.7544/issn1000-1239.2016.20150070
Object retrieval methods based on the bag-of-visual-words model suffer from several problems, such as low time efficiency, low distinctiveness of visual words, and weak visual semantic resolution caused by missing spatial information and quantization error. In this article, an object retrieval method based on an enhanced dictionary and spatially-constrained similarity measurement is proposed to address these problems. Firstly, E2LSH (exact Euclidean locality sensitive hashing) is used to identify and eliminate noisy key points and similar key points, which improves the efficiency and quality of the visual words. Then, the stop words of the dictionary are eliminated with a chi-square model (CSM) to improve the discriminative ability of the visual dictionary. Finally, the spatially-constrained similarity measurement is introduced to accomplish object retrieval, together with a robust re-ranking method based on the K-nearest neighbors of the query that automatically refines the initial search results. Experimental results indicate that the quality of the visual dictionary is enhanced, the discriminative ability of the visual semantic expression is effectively improved, and object retrieval performance is substantially boosted compared with traditional methods.
A Fast Partial Label Learning Algorithm Based on Max-loss Function
Zhou Yu, He Jianjun, Gu Hong, Zhang Junxing
2016, 53(5):  1053-1062.  doi:10.7544/issn1000-1239.2016.20150267
In the age of big data, learning with weak supervision has become one of the hot research topics in machine learning. Partial label learning, which deals with the problem where each training example is associated with a set of candidate labels among which only one is the ground truth, is an important weakly supervised learning framework proposed recently and can be widely used in many real-world tasks. The max-loss function can accurately capture the relationship between a partially labeled sample and its labels. However, since the max-loss function usually leads to a nondifferentiable objective that is difficult to solve, it is rarely adopted in existing algorithms. Moreover, existing partial label learning algorithms can only handle small-scale data and can rarely be applied to big data. To address these two problems, this paper presents a fast partial label learning algorithm with the max-loss function. The basic idea is to transform the nondifferentiable objective into a differentiable concave function by introducing the aggregate function to approximate the max(·) operator involved in the max-loss function, and then to solve the resulting concave objective with a stochastic quasi-Newton method. The experimental results show that the proposed algorithm not only achieves higher accuracy but also uses less computing time than state-of-the-art algorithms based on average-loss functions. Moreover, the proposed algorithm can handle problems with millions of samples within several minutes.
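A standard way to smooth the max(·) operator with an aggregate function is the log-sum-exp approximation sketched below; the paper's exact aggregate function and smoothing parameter may differ, so treat this only as an illustration of the idea of making the max-loss differentiable.

```python
import math

def aggregate_max(values, p=10.0):
    """Smooth, differentiable approximation of max(values) via the aggregate
    (log-sum-exp) function; a larger p gives a tighter approximation."""
    m = max(values)                      # subtract the max for numerical stability
    return m + math.log(sum(math.exp(p * (v - m)) for v in values)) / p

z = [0.2, 1.5, 0.9]
print(max(z), aggregate_max(z, p=5.0), aggregate_max(z, p=50.0))
```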
A Clustering Algorithm for Large-Scale Categorical Data and Its Parallel Implementation
Ding Xiangwu, Guo Tao, Wang Mei, Jin Ran
2016, 53(5):  1063-1071.  doi:10.7544/issn1000-1239.2016.20148422
The CLOPE algorithm has achieved good results in clustering large, sparse, high-dimensional categorical datasets. However, it is hard to stably find the globally optimal clusters, since the order of the input data can affect the clustering result. To deal with this problem, this paper proposes the p-CLOPE algorithm, which iteratively divides the input data into multiple equal parts and clusters their different permutations. In each iteration of p-CLOPE, the input dataset is split into p parts, which are permuted into p! datasets with different part orders; each dataset is then clustered, and the optimal clustering, chosen according to the profit, becomes the input of the next iteration. To handle the time complexity of this process, a result-reusing strategy is put forward that further improves the speed of clustering. Finally, a distributed solution implementing p-CLOPE on the Hadoop platform is presented, and a clustering tool is developed and released to the open-source community. Experiments show that p-CLOPE achieves better results than CLOPE: on the Mushroom dataset, when CLOPE achieves its optimal result, p-CLOPE achieves a 357% higher profit value. When dealing with big data, parallel p-CLOPE greatly shortens the computing time compared with serial p-CLOPE, achieving nearly p! speedup when there are enough computing resources.
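For reference, the profit criterion that CLOPE (and hence p-CLOPE) optimizes can be computed as in the sketch below; the toy transactions and the repulsion parameter r are illustrative assumptions.

```python
from collections import Counter

def clope_profit(clusters, r=2.0):
    """CLOPE profit criterion: for each cluster, S = total item occurrences,
    W = number of distinct items (cluster width), N = number of transactions;
    profit = sum(S / W**r * N) / sum(N). Higher profit means more compact clusters."""
    num, den = 0.0, 0
    for transactions in clusters:
        items = Counter(item for t in transactions for item in t)
        S = sum(items.values())
        W = len(items)
        N = len(transactions)
        if W > 0:
            num += S / (W ** r) * N
        den += N
    return num / den if den else 0.0

# hypothetical clustering of four small transactions
clusters = [[{"a", "b"}, {"a", "b", "c"}], [{"x", "y"}]]
print(clope_profit(clusters, r=2.0))
```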
Detecting Infeasible Paths Based on Branch Correlations Analysis
Jiang Shujuan, Han Han, Shi Jiaojiao, Zhang Yanmei, Ju Xiaolin, Qian Junyan
2016, 53(5):  1072-1085.  doi:10.7544/issn1000-1239.2016.20148031
The existence of infeasible paths causes a waste of test resources in software testing, and detecting these infeasible paths effectively can save test resources and improve test efficiency. Since the correlation between different conditional statements is the main cause of infeasible paths in a program, and effort is wasted attempting to cover paths that can never be executed, determining branch correlations plays an important role in detecting infeasible paths. This paper proposes a new approach for detecting infeasible paths based on association analysis and data flow analysis. Firstly, it builds data sets that reflect the static dependencies and the dynamic execution information of conditional statements by combining static analysis with dynamic analysis. Then, with two types of branch correlations defined (called A-B correlation and B-B correlation), it determines the branch correlations with two algorithms based on association analysis and data flow analysis, respectively. Finally, it detects infeasible paths in accordance with the obtained and refined branch correlations. The paper applies the proposed approach to benchmark programs and industrial programs to validate its efficiency and effectiveness. The experimental results indicate that our approach can detect infeasible paths accurately and improve the efficiency of software testing.
Solving Cost Prediction Based Search in Symbolic Execution
Liu Jingde, Chen Zhenbang, Wang Ji
2016, 53(5):  1086-1094.  doi:10.7544/issn1000-1239.2016.20148330
In symbolic execution, constraint solving takes a large proportion of the execution time. The solving time of a constraint varies greatly with its complexity, which is particularly evident when analyzing programs with complex numerical calculations. Solving more constraints within a specified time contributes to covering more statements and exploring more paths. Considering this feature, we propose a solving cost prediction based search strategy for symbolic execution. Based on experimental constraint-solving data, we derive an empirical formula to evaluate the complexity of constraints, and predict the solving cost of a constraint by combining the formula with historical solving cost data. Our strategy uses the prediction to explore paths with lower solving costs at a higher priority. We have implemented our strategy in KLEE, a state-of-the-art symbolic executor for C, and carried out experiments on 12 randomly selected modules of the GNU Scientific Library (GSL). The experimental results indicate that, in the same period and compared with the existing strategy, our strategy explores 24.34% more paths on average without sacrificing statement coverage, and finds more bugs. In addition, the time needed by our strategy to find the same bugs decreases by 44.43% on average.
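The following toy sketch shows the general shape of a cost-prediction-driven search strategy: states are kept in a priority queue ordered by the predicted solving cost of their pending path constraints. The cost predictor here is a made-up placeholder, not the empirical formula derived in the paper.

```python
import heapq
import itertools

class CostAwareSearcher:
    """Toy search strategy: states whose pending path constraint is predicted
    to be cheap to solve are explored first."""
    def __init__(self, predict_cost):
        self.predict_cost = predict_cost
        self.heap = []
        self.counter = itertools.count()   # tie-breaker for equal predicted costs

    def add_state(self, state, constraint):
        heapq.heappush(self.heap,
                       (self.predict_cost(constraint), next(self.counter), state))

    def next_state(self):
        return heapq.heappop(self.heap)[2] if self.heap else None

# hypothetical predictor: cost grows with constraint size and nonlinear operators
predict = lambda c: len(c) + 10 * c.count("*")
s = CostAwareSearcher(predict)
s.add_state("state-A", "x + y < 10")
s.add_state("state-B", "x * x + y * y < 100")
print(s.next_state())   # state-A: its constraint is predicted cheaper
```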
Reasoning on Constraints for Incoherent Operations in Metadata Repository Systems
Zhao Xiaofei, Gao Yang, Shi Yinghuan, Shi Zhongzhi
2016, 53(5):  1095-1105.  doi:10.7544/issn1000-1239.2016.20148461
The architecture of repository system metadata is hierarchical, multi-layered, dynamically changing and complicated, and the prevailing repository system specifications provide insufficient support for validating well-formedness constraints, so checking well-formedness constraints in MOF (meta object facility) metadata repository systems is a difficult problem. This paper presents an approach that can reason on the well-formedness constraints across different layers, so that the operations that may violate the constraints can be determined automatically. Firstly, a group of inner activities, which are more accurate and adaptable than the establishment activities provided by MOF, is proposed, and the correspondence between inner activities and establishment activities is defined. Then we study how to reason on the constraints directly so that incoherent inner activities can be detected. Finally, we study how to deduce potentially incoherent operations by determining the establishment activities that potentially violate the constraints and testing whether these activities are included in the operations. Our approach can improve the efficiency of well-formedness constraint checking because the precise set of constraints that may be violated by the operations is reduced before the actual checking process. In addition, our method also helps the constraint design process, since a constraint can be discarded if we find that it can never be violated by any operation.
Accurate Histogram Release under Differential Privacy
Zhang Xiaojian, Shao Chao, Meng Xiaofeng
2016, 53(5):  1106-1117.  doi:10.7544/issn1000-1239.2016.20150304
Grouping-based differentially private histogram release has attracted considerable research attention in recent years. The trade-off between the approximation error caused by using each group's mean and the Laplace error due to Laplace noise constrains the accuracy of histogram release, and most existing grouping-based methods cannot efficiently balance the two errors. This paper proposes an efficient differentially private method, called DiffHR (differentially private histogram release), to publish histograms. In order to boost the accuracy of the released histogram, DiffHR combines the Metropolis-Hastings method from MCMC (Markov chain Monte Carlo) with the exponential mechanism to obtain an efficient sorting method, which generates a differentially private histogram by sampling and exchanging two buckets at a time to approximate the correct order. To balance the Laplace error and the approximation error efficiently, a utility-driven adaptive clustering method with O(n) time complexity is proposed in DiffHR to partition the sorted histogram. DiffHR is compared with existing methods such as GS and AHP on real datasets. The experimental results show that DiffHR outperforms its competitors and achieves accurate results.
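As background for the error trade-off discussed above, the sketch below shows the baseline Laplace mechanism for histograms (sensitivity 1, noise scale 1/ε); DiffHR's sorting and clustering steps are refinements built on top of this baseline and are not reproduced here.

```python
import math
import random

def laplace_histogram(counts, epsilon, seed=0):
    """Baseline differentially private histogram: add Laplace(1/epsilon) noise to
    every bucket (sensitivity 1: one record changes one count by at most 1).
    Grouping-based methods then trade this Laplace error against the
    approximation error introduced by averaging within groups."""
    rng = random.Random(seed)
    scale = 1.0 / epsilon
    noisy = []
    for c in counts:
        u = rng.uniform(-0.5, 0.5)       # sample Laplace noise via inverse CDF
        noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        noisy.append(c + noise)
    return noisy

print(laplace_histogram([120, 45, 45, 47, 300], epsilon=0.5))
```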
Free Form Deformation Method of Parametric Surfaces on Rectangular Region
Zhang Li, Ge Xianyu, Tan Jieqing
2016, 53(5):  1118-1127.  doi:10.7544/issn1000-1239.2016.20148312
For free-form deformation of parametric surfaces, a new method based on an extension function is proposed. The extension function is composed of piecewise polynomials and defined on a rectangular region. Based on it, the constructed extension factor not only possesses desirable properties such as symmetry, single peak, linear peak and region peak, but also carries parameters with obvious geometric meanings. These properties and parameters make the extension factor very suitable for interactive design, and its simple polynomial form makes it easy to construct and convenient to control. By applying the deformation matrix constructed from the extension factor to the equations of the original surfaces, a rich variety of deformed surfaces can be obtained, with effects such as shearing, concavity and convexity, expansion and contraction, and changes of the deformation center. More complex deformation effects can be realized by superimposing deformations. Numerous numerical experiments are given at the end of the paper, illustrating different kinds of deformation effects and the adjusting effects of different parameters.
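The sketch below illustrates the general idea of deforming surface points with a compactly supported, single-peak factor; the cubic falloff used here is a generic choice for illustration only and is not the extension function defined in the paper.

```python
import math

def bump(t, radius=1.0):
    """A simple symmetric, single-peak factor on [-radius, radius]: a piecewise
    polynomial that is 1 at the center and fades smoothly to 0 at the boundary."""
    s = abs(t) / radius
    return (1.0 - s) ** 2 * (1.0 + 2.0 * s) if s < 1.0 else 0.0

def deform_point(p, center, direction, amplitude, radius):
    """Displace a point along 'direction' by an amount controlled by the factor
    of its distance to the deformation center (a toy free-form deformation)."""
    w = amplitude * bump(math.dist(p, center), radius)
    return tuple(pi + w * di for pi, di in zip(p, direction))

print(deform_point((0.2, 0.1, 0.0), center=(0.0, 0.0, 0.0),
                   direction=(0.0, 0.0, 1.0), amplitude=0.5, radius=1.0))
```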
Texture Image Classification with Noise-Tolerant Local Binary Pattern
Ji Zhong, Nie Linhong
2016, 53(5):  1128-1135.  doi:10.7544/issn1000-1239.2016.20148320
The local binary pattern (LBP) is a simple and effective texture descriptor; however, it is very sensitive to image noise. To deal with this problem, we propose an efficient texture feature named noise-tolerant complete enhanced local binary pattern (CELBP^NT) to enhance the discriminative ability on noisy texture images. Derived from the local binary pattern, CELBP^NT is robust to illumination, rotation and noise. Its feature extraction involves three steps. First, the patterns in LBP are reclassified according to their structures and frequencies to form an enhanced LBP (ELBP). Then, to describe the local features completely and sufficiently, the difference of modulus values and the center pixel information are added to ELBP to develop a complete ELBP feature, named CELBP, whose adaptive threshold is determined by the image size. Finally, CELBP^NT is obtained by applying multi-scale analysis to CELBP. The features are evaluated on the popular Outex database with different intensities and types of noise. Extensive experimental results show that CELBP^NT not only performs better than a number of state-of-the-art LBP variants under noise-free conditions, but also effectively improves texture classification on noisy images thanks to its high robustness and distinctiveness.
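For context, the sketch below computes the basic 3x3 LBP code that CELBP^NT builds upon; the tiny image is an illustrative example, and the enhanced, complete and noise-tolerant extensions described in the abstract are not reproduced here.

```python
def lbp_3x3(img, r, c):
    """Basic 3x3 local binary pattern: threshold the 8 neighbors of pixel (r, c)
    against the center value and pack the comparison results into one byte."""
    center = img[r][c]
    # neighbors enumerated clockwise starting from the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

img = [[10, 12,  9],
       [14, 11, 20],
       [ 8, 11, 30]]
print(lbp_3x3(img, 1, 1))   # LBP code of the center pixel
```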
Parallel Algorithm of Fast Independent Component Analysis for Dimensionality Reduction on Many Integrated Core
Fang Minquan, Zhang Weimin, Zhou Haifang
2016, 53(5):  1136-1146.  doi:10.7544/issn1000-1239.2016.20148080
Fast independent component analysis (FastICA) for hyperspectral image dimensionality reduction involves massive matrix and iterative calculations. By analyzing the hotspots of the FastICA algorithm, we design parallel schemes for covariance matrix calculation, whitening and ICA iteration on the many integrated core (MIC) architecture, and implement and validate an M-FastICA algorithm; we also present a performance model for M-FastICA. We apply a series of optimization methods to the parallel schemes of the different hotspots: reforming arithmetic operations, interchanging and unrolling loops, transposing matrices, using intrinsics and so on. In particular, we propose a novel method to balance the loads when dealing with the lower triangular matrix, and we measure the performance effects of these optimizations. Our experiments show that the M-FastICA algorithm reaches a maximum speed-up of 42X in our tests and runs 2.2X faster than the CPU parallel version on 24 cores. We also investigate how the speed-ups change with the number of bands. The experimental results validate our performance model with acceptable accuracy, so the model can provide a roofline for our optimization effort.
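One of the hotspots mentioned above, the whitening step, can be sketched with NumPy as follows; the data shape (bands x pixels) and the eigendecomposition-based whitening are standard FastICA preprocessing, not the paper's MIC-optimized implementation.

```python
import numpy as np

def whiten(X):
    """Whitening step of FastICA: center the data, eigendecompose the covariance
    matrix and rescale so that the transformed data has unit covariance."""
    Xc = X - X.mean(axis=1, keepdims=True)          # center each band (row)
    cov = Xc @ Xc.T / Xc.shape[1]                   # bands x bands covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T
    return W @ Xc

X = np.random.default_rng(0).normal(size=(4, 1000))  # 4 "bands", 1000 pixels
Z = whiten(X)
print(np.round(Z @ Z.T / Z.shape[1], 2))              # approximately the identity
```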
An Efficient Parallel Spectral Element Scheme for Solving Seismic Wave Equation
Lin Deng, Cui Tao, Leng Wei, Zhang Linbo
2016, 53(5):  1147-1155.  doi:10.7544/issn1000-1239.2016.20148440
Numerical simulation of seismic waves plays an essential role in seismology and seismic exploration. We propose an efficient parallel spectral element scheme for the seismic wave equation with a perfectly matched layer (PML). The PML is integrated into the seismic wave equation to absorb outgoing waves and mimic an unbounded domain. Furthermore, to adapt to complex topography and enable explicit time stepping, the spectral element method (SEM) is used to discretize the seismic wave equation with PML, resulting in a spectral element scheme. In addition, we demonstrate that the element stiffness matrices can be decomposed, which greatly reduces the storage of the stiffness matrix and accelerates the stiffness matrix-vector multiplication, thus remarkably speeding up the scheme and cutting down memory cost. We also compare several known spectral element schemes and show that ours is superior to the others in both computation and storage. Combined with parallel techniques, an efficient parallel spectral element solver for the seismic wave equation with PML is presented. Numerical experiments show that our scheme is correct, exhibits good strong and weak scalability, and adapts well to complex topography.
Large-Scale Scalable Parallel Computing Based on LBM with Multiple-Relaxation-Time Model
Liu Zhixiang, Fang Yong, Song Anping, Xu Lei, Wang Xiaowei, Zhou Liping, Zhang Wu
2016, 53(5):  1156-1165.  doi:10.7544/issn1000-1239.2016.20148441
In large-scale numerical simulation of three-dimensional complex flows, the multiple-relaxation-time (MRT) model of the lattice Boltzmann method (LBM) has better numerical stability than the single-relaxation-time model. Based on the large eddy simulation (LES) turbulence model and the interpolation scheme for surface boundaries, the parallelism of grid generation, flow-information initialization and the iterative calculation is analyzed under the D3Q19 discrete velocity model. Current high-performance computing clusters typically use a distributed architecture with communication between compute nodes through the message passing interface (MPI). Considering both the features of distributed clusters and the load balance of the calculation, and using the MPI programming model, we study grid generation, flow-information initialization and the parallel iterative algorithm suitable for large-scale distributed clusters. The proposed parallel algorithm is also applicable to the D3Q15 and D3Q27 discrete velocity models. Two cases are considered in the numerical simulations: a fixed total amount of computation, and a fixed amount of computation per computing core; the parallel performance is analyzed for both cases. Experimental results on the Sunway BlueLight supercomputer show that the proposed parallel algorithm maintains good speedup and scalability on the order of hundreds of thousands of computing cores.
A Parallel Communication Algorithm in Supersonic COIL’s Calculations Using Multiblock Mesh
Guo Hong, Li Yan, An Hengbin
2016, 53(5):  1166-1172.  doi:10.7544/issn1000-1239.2016.20148444
In this paper, a parallel communication algorithm for supersonic chemical oxygen iodine laser (COIL) calculations on multiblock structured meshes has been designed and implemented. The communication algorithm, designed for large-scale supersonic COIL simulation, is based on the JASMIN (J parallel adaptive structured mesh applications infrastructure) infrastructure. Large-scale supersonic COIL simulation faces several communication problems, such as the complexity of describing the connecting relationships between blocks, the complexity of managing boundary conditions, and the non-uniformity of the communication schedule. The communication algorithm in this paper includes a block-relationship recognition algorithm that computes the connecting relationships between blocks automatically, a special data structure that helps manage boundary conditions, and a unified communication schedule that reduces communication time. According to our results, these communication problems are resolved by the proposed algorithm, and supersonic COIL simulations based on it can be implemented easily and run routinely. The simulation with 4.5 million mesh cells in this paper runs efficiently on thousands of processor cores.