ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 July 2019, Volume 56 Issue 7
Location Prediction Model Based on Transportation Mode and Semantic Trajectory
Zhang Jinglei, Shi Hailong, Cui Li
2019, 56(7):  1357-1369.  doi:10.7544/issn1000-1239.2019.20170662
Existing location prediction research focuses on mining and analyzing trajectory data, but there is still room to improve prediction results by mining the information contained in trajectory data together with exogenous data. In this paper, we propose a new location prediction model that mines both the semantic trajectory and the transportation mode. On one hand, the model first finds similar users according to the semantic trajectory, then builds a frequent pattern set by combining the individual semantic trajectory with the location trajectories of similar users, and finally obtains a candidate set of future locations. On the other hand, it recognizes the future transportation mode, and then builds a Markov model over the historical transportation modes and the historical location trajectory to predict another candidate set of future locations. The final prediction is obtained from these two candidate sets. This method not only uses the semantic trajectory to mine the behavior of similar users, but also incorporates the transportation mode to overcome the limitations of the location trajectory alone. Experimental results on daily trajectory data show that the accuracy of the model reaches 86%, about 5% higher than that of the model without transportation-mode matching under different frequent pattern supports. Therefore, the model effectively improves location prediction results.
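The Markov component can be illustrated with a minimal sketch (Python, not the authors' code): build first-order transition probabilities from a historical location sequence and return the most likely next locations. All names and the example sequence are hypothetical.

    from collections import defaultdict, Counter

    def build_transition_model(history):
        # Estimate first-order Markov transition probabilities from a
        # location sequence such as ["home", "station", "office", ...].
        counts = defaultdict(Counter)
        for prev, nxt in zip(history, history[1:]):
            counts[prev][nxt] += 1
        return {loc: {nxt: c / sum(ctr.values()) for nxt, c in ctr.items()}
                for loc, ctr in counts.items()}

    def predict_next(model, current, top_k=3):
        # Return the top-k candidate next locations for the current location.
        candidates = model.get(current, {})
        return sorted(candidates, key=candidates.get, reverse=True)[:top_k]

    history = ["home", "station", "office", "station", "home", "station", "office"]
    model = build_transition_model(history)
    print(predict_next(model, "station"))   # e.g. ['office', 'home']

In the paper, such a Markov-based candidate set is combined with the frequent-pattern candidates and conditioned on the recognized transportation mode.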
Cross-Media Clustering by Share and Private Information Maximization
Yan Xiaoqiang, Ye Yangdong
2019, 56(7):  1370-1382.  doi:10.7544/issn1000-1239.2019.20180470
Recently, the rapid emergence of cross-media data with typical multi-source and heterogeneous characteristics brings great challenges to traditional data analysis approaches. However, most existing approaches for cross-media data rely heavily on a shared latent feature space to construct the relationships between multiple modalities, while ignoring the private information hidden in each modality. Aiming at this problem, this paper proposes a novel share and private information maximization (SPIM) algorithm for cross-media data clustering, which incorporates both the shared and the private information into the clustering process. Firstly, we present two shared information construction models: 1) Hybrid words (H-words) model. In this model, the low-level features in each modality are transformed into word or visual-word co-occurrence vectors, and then a novel agglomerative information maximization is presented to build the hybrid word space for all modalities, which ensures the statistical correlation between the low-level features of multiple modalities. 2) Clustering ensemble (CE) model. This model adopts mutual information to measure the similarity between the clustering partitions of different modalities, which ensures the semantic correlation of the high-level clustering partitions. Secondly, the SPIM algorithm integrates the shared information of multiple modalities and the private information of individual modalities into a unified objective function. Finally, the optimization of the SPIM algorithm is performed by a sequential “draw-and-merge” procedure, which guarantees that the objective function converges to a local maximum. The experimental results on 6 cross-media datasets show that the proposed approach compares favorably with existing state-of-the-art cross-media clustering methods.
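The clustering ensemble idea of measuring agreement between modality-specific partitions with mutual information can be sketched in a few lines; the labels below are illustrative, not the paper's data.

    from sklearn.metrics import normalized_mutual_info_score

    # Cluster assignments obtained independently from the text and the image
    # modality of the same six documents (labels are illustrative).
    text_partition  = [0, 0, 1, 1, 2, 2]
    image_partition = [1, 1, 0, 0, 2, 2]

    # Higher mutual information means the two modality-specific partitions
    # agree more strongly at the semantic (cluster) level.
    print(normalized_mutual_info_score(text_partition, image_partition))  # 1.0

Note that cluster IDs only need to match up to relabeling, which is exactly what mutual information captures.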
A Classification Method of Scientific Collaborator Potential Prediction Based on Ensemble Learning
Ai Ke, Ma Guoshuai, Yang Kaikai, Qian Yuhua
2019, 56(7):  1383-1395.  doi:10.7544/issn1000-1239.2019.20180641
Scientific collaboration is an important route to academic achievement, and many high-level research results are produced through cooperation. Studying collaboration potential can guide scholars in choosing collaborators and maximize the efficiency of scientific research. However, the current explosion of scholarly big data hinders the effective choice of collaborators. To solve this problem, based on scholar-paper big data, after feature analysis and optimization and comprehensively considering individual and relational attributes of scholars' papers, institutions, research interests, etc., we construct sample features from various dimensions such as paper title, paper rank, paper number, time, and coauthor order. Taking the journal or conference level of papers as the label of each collaborator pair, which indicates the potential of the current collaboration, and making use of the strong learning ability of ensemble methods, a scientific collaborator potential prediction model based on an ensemble learning classification method is proposed. After analyzing and constructing the feature set corresponding to the scientific collaborator potential prediction problem, a classification method is adopted to solve it. In experiments, the accuracy, recall, and F1 score are much higher than those of traditional machine learning methods and converge to high values (above 80%) with few samples and little time, indicating the superiority of the proposed model.
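As a toy illustration of casting collaborator potential prediction as ensemble classification, the sketch below trains a gradient-boosting classifier on synthetic pair features; the feature meanings, data, and labels are placeholders, not the paper's dataset.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import f1_score

    # X: one row per candidate collaborator pair; columns stand in for features
    # such as paper count, venue rank, shared interests, and coauthor order.
    # y: 1 if the pair is labelled as high-potential, else 0 (synthetic here).
    rng = np.random.default_rng(0)
    X = rng.random((500, 6))
    y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = GradientBoostingClassifier().fit(X_tr, y_tr)
    print("F1:", round(f1_score(y_te, clf.predict(X_te)), 3))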
A Method of Minimality-Checking of Diagnosis Based on Subset Consistency Detection
Tian Naiyu, Ouyang Dantong, Liu Meng, Zhang Liming
2019, 56(7):  1396-1407.  doi:10.7544/issn1000-1239.2019.20180192
Model-based diagnosis is an intelligent inference technology developed to overcome the serious defects of the first generation of diagnostic systems, and with the continuous development of related work it has become a significant branch of AI. However, most research focuses on the process of finding diagnoses, while checking a diagnosis, which ensures the minimality of the final solution, is also a crucial step in the problem. The traditional minimality check compares a new diagnosis with those in the existing diagnosis set, testing whether the set contains a superset or subset of the new diagnosis. Its disadvantage is that as the number of diagnoses grows, the check becomes progressively harder and more time-consuming. To solve this problem, we propose a new minimality-checking method based on subset consistency detection (SCD). To avoid the influence of the growing diagnosis set, we determine the minimality of a diagnosis through consistency detection on a few subsets of the diagnosis itself. Our method can be applied to many efficient diagnostic algorithms such as grouped diagnosis (GD) and abstract circuit diagnosis (ACDIAG), and SCD improves their efficiency.
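The underlying check can be stated compactly: because every superset of a diagnosis is again a diagnosis, a candidate is minimal exactly when none of its one-element-removed subsets is itself a diagnosis. The sketch below is a direct statement of that definition with an assumed `is_diagnosis` consistency-check callback, not the paper's optimized SCD procedure.

    def is_minimal_diagnosis(delta, is_diagnosis):
        # A candidate diagnosis is minimal iff no proper subset is itself a
        # diagnosis. Because diagnoses are closed under supersets, it suffices
        # to test the subsets obtained by dropping a single element.
        return not any(is_diagnosis(delta - {c}) for c in delta)

    # Illustrative use: pretend {a, b} and its supersets are the only diagnoses.
    known = [{"a", "b"}]
    is_diag = lambda cand: any(d <= cand for d in known)
    print(is_minimal_diagnosis({"a", "b"}, is_diag))        # True  (minimal)
    print(is_minimal_diagnosis({"a", "b", "c"}, is_diag))   # False (not minimal)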
The Analysis and Prediction of Spatial-Temporal Talent Mobility Patterns
Xu Huang, Yu Zhiwen, Guo Bin, Wang Zhu
2019, 56(7):  1408-1419.  doi:10.7544/issn1000-1239.2019.20180674
With the development of economic globalization, the exchange of talents among cities has become increasingly frequent, and brain drain and brain gain have had a tremendous impact on the development of technology and the economy. An in-depth study of the regularities of talent mobility is the basis for monitoring talent exchange and formulating a scientific talent flow policy. To this end, in this paper we propose a data-driven talent mobility analysis method to study the patterns of talent exchange among cities and to forecast future mobility. Specifically, we leverage a data structure named the talent mobility matrix sequence to represent and mine the temporal-spatial patterns of inter-regional talent mobility, and based on the observed talent flows we compare the attractiveness of different cities to talents. Further, we propose a talent flow prediction model that combines convolutional and recurrent neural networks to forecast regional talent flows. Theoretically, the model can alleviate the data sparsity problem and reduce the number of parameters compared with traditional regression models. The model was validated on a large-scale dataset collected from an online professional network. Experimental results show that the proposed model reduces the error by 15% on average compared with benchmark models.
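A minimal sketch of the convolution-plus-recurrence idea (PyTorch, purely illustrative layer sizes, not the authors' architecture): each time step is a city-by-city mobility matrix, a small CNN extracts spatial features, and an LSTM models their temporal evolution before predicting the next matrix.

    import torch
    import torch.nn as nn

    class TalentFlowNet(nn.Module):
        def __init__(self, n_city, hidden=64):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten())       # -> 8*4*4 features
            self.rnn = nn.LSTM(8 * 4 * 4, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_city * n_city)   # next flow matrix

        def forward(self, seq):                               # seq: (B, T, N, N)
            b, t, n, _ = seq.shape
            feats = self.conv(seq.reshape(b * t, 1, n, n)).reshape(b, t, -1)
            out, _ = self.rnn(feats)
            return self.head(out[:, -1]).reshape(b, n, n)

    model = TalentFlowNet(n_city=10)
    pred = model(torch.rand(2, 12, 10, 10))   # 12 past periods -> next period
    print(pred.shape)                         # torch.Size([2, 10, 10])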
Multi-Objective Evolutionary Sparse Recovery Approach Based on Adaptive Local Search
Liu Haolin, Chi Jinlong, Deng Qingyong, Peng Xin, Pei Tingrui
2019, 56(7):  1420-1431.  doi:10.7544/issn1000-1239.2019.20180557
In sparse recovery, a regularization parameter is usually introduced to aggregate the measurement error term and the sparsity term into a single function, but it is hard to balance the two, and this weakness usually leads to low recovery precision. To solve this problem, a new evolutionary multi-objective approach based on an adaptive local search method is proposed in this paper. First, two gradient iterative soft-thresholding local search methods, based on the l_1 norm and the l_{1/2} norm respectively, are designed to obtain corresponding solutions and improve the convergence speed and accuracy of the solutions. Second, the winner solution is selected in each round by comparing the corresponding objective function values. Then, based on the competition success rate, the winning local search method is chosen adaptively to generate subsequent solutions. Finally, the optimal solution is derived by an angle-based method on the knee region of the Pareto front. Experiments show that the measurement error and sparsity terms can be balanced, and the proposed method outperforms the other eight single-objective algorithms in terms of recovery accuracy. Compared with the StEMO algorithm, our approach improves accuracy by more than 33.8% when the measurement dimension M=600, by 82.7% when the noise intensity δ=0.002, and by 7.38% when the sparsity ratio K/N=0.3.
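The l_1 local search mentioned above is built on the standard iterative soft-thresholding step; a plain (non-adaptive) version is sketched below for reference, with illustrative problem sizes. The l_{1/2} variant and the multi-objective selection are not shown.

    import numpy as np

    def soft_threshold(x, t):
        # Soft-thresholding operator used in the l_1 local search.
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A, y, lam=0.1, iters=200):
        # Plain iterative soft-thresholding for min ||Ax - y||^2 + lam*||x||_1.
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
        return x

    A = np.random.randn(60, 200)                   # measurement matrix
    x_true = np.zeros(200); x_true[:5] = 3.0       # 5-sparse signal
    y = A @ x_true
    print(np.count_nonzero(np.abs(ista(A, y)) > 1e-3))   # recovered support size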
Joint Drug Entities and Relations Extraction Based on Neural Networks
Cao Mingyu, Yang Zhihao, Luo Ling, Lin Hongfei, Wang Jian
2019, 56(7):  1432-1440.  doi:10.7544/issn1000-1239.2019.20180714
Drug entity and relation extraction can accelerate biomedical research and is also the basis for building a biomedical knowledge base and other research. Traditionally, the pipeline method has been used to tackle this problem: it first identifies the entities in the text by named entity recognition (NER), and then performs relation classification (RC) on each entity pair. The pipeline method has three problems. The first is error propagation: wrong NER results lead to wrong relation classification results. The other two are that it ignores the interaction between the two subtasks and the interaction between different relations in the same sentence. Considering these problems, this article proposes a joint drug entity and relation extraction method based on neural networks. The method employs a new tagging scheme that represents both entity and relation information in the tags, converting the joint extraction task into a tagging problem. It uses word embeddings and character embeddings as input, and extracts drug entities and relations with a BiLSTM-CRF model. The results show that, on the DDI (drug-drug interactions) 2013 corpus, this method achieves an 89.9% F-score for NER and a 67.3% F-score for relation extraction (RE), which is better than the pipeline method using the same model.
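The key idea of such a tagging scheme is that one tag per token encodes both the entity boundary and its role in a relation, so a single sequence labeller handles the whole task. A hypothetical encoding (tag names are illustrative, not the paper's exact scheme) might look like this:

    def to_joint_tags(tokens, entities, relation):
        # Tags combine boundary (B/I), relation type, and argument role
        # (1 = first entity, 2 = second); unannotated tokens get "O".
        tags = ["O"] * len(tokens)
        for role, (start, end) in enumerate(entities, start=1):
            for i in range(start, end):
                prefix = "B" if i == start else "I"
                tags[i] = f"{prefix}-{relation}-{role}"
        return tags

    tokens = ["Aspirin", "may", "increase", "the", "effect", "of", "warfarin"]
    print(to_joint_tags(tokens, entities=[(0, 1), (6, 7)], relation="EFFECT"))
    # ['B-EFFECT-1', 'O', 'O', 'O', 'O', 'O', 'B-EFFECT-2']

The BiLSTM-CRF then predicts such tags directly, and the entities plus their relation are read back off the tag sequence.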
A Study of Using TEE on Edge Computing
Ning Zhenyu, Zhang Fengwei, Shi Weisong
2019, 56(7):  1441-1453.  doi:10.7544/issn1000-1239.2019.20180522
The concept of edge computing introduces an emerging computing model that mitigates the high latency caused by data transmission in the traditional cloud computing model and helps keep privacy- or security-sensitive data confidential. However, the security of the execution environment on edge nodes is still a non-negligible concern that threatens the whole computing model. Recently, hardware vendors have designed dedicated trusted execution environments (TEEs) on different platforms, and integrating these TEEs into edge nodes would be an efficient way to secure the computation on these nodes. In this paper, we investigate a variety of popular TEEs on the traditional computing model and discuss the pros and cons of each TEE based on recent research. Moreover, we further study two popular TEEs, Intel software guard extensions (SGX) and ARM TrustZone technology, and conduct comprehensive performance and security analyses on an Intel Fog Node Reference Architecture platform and an ARM Juno development board, respectively. The analysis results show that using these hardware-assisted TEEs on edge computing platforms incurs low overhead while achieving higher security. A discussion of the security challenges of the TEEs is also presented to help improve the reliability of these TEEs and of edge computing.
A Blind Watermark Decoder in Nonsubsampled Shearlet Domain Using Bivariate Weibull Distribution
Niu Panpan, Wang Xiangyang, Yang Siyu, Wen Taotao, Yang Hongying
2019, 56(7):  1454-1469.  doi:10.7544/issn1000-1239.2019.20180278
Digital image watermarking has become a necessity in many applications such as data authentication, broadcast monitoring on the Internet, and ownership identification. There are three indispensable yet contradictory requirements for a watermarking scheme: perceptual transparency, watermark capacity, and robustness against attacks, so a watermarking scheme should provide a trade-off among these requirements from the information-theoretic perspective. Improving imperceptibility, watermark capacity, and robustness at the same time has been a challenge for all image watermarking algorithms. In this paper, we propose a novel digital image watermark decoder in the nonsubsampled Shearlet transform (NSST) domain, in which a probability density function (PDF) based on the bivariate Weibull distribution is used. In the presented scheme, we construct nonlinear monotone function based adaptive high-order watermark embedding strength functions by exploiting human visual system (HVS) properties, and embed the watermark data into the singular values of high-entropy NSST coefficient blocks. At the watermark receiver, the singular values of the high-entropy NSST coefficient blocks are first modeled with the bivariate Weibull distribution according to their inter-scale dependencies, the statistical model parameters of the bivariate Weibull distribution are then estimated effectively, and finally a blind watermark extraction approach is developed using the maximum likelihood method based on the bivariate Weibull distribution. The experimental results show that the proposed blind watermark decoder is superior to other decoders in terms of imperceptibility and robustness.
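A maximum-likelihood decoder of this kind decides each watermark bit by comparing block likelihoods under the two embedding hypotheses. The sketch below is a deliberately simplified univariate illustration (SciPy Weibull model, multiplicative embedding, made-up parameters), not the bivariate decoder of the paper.

    import numpy as np
    from scipy.stats import weibull_min

    def decode_bit(coeffs, shape, scale, alpha=0.05):
        # ML decision for one bit embedded multiplicatively in a block of
        # coefficients modelled as Weibull: x -> x*(1+alpha) for bit 1,
        # x -> x*(1-alpha) for bit 0.  Univariate simplification.
        ll1 = weibull_min.logpdf(coeffs / (1 + alpha), shape, scale=scale).sum()
        ll0 = weibull_min.logpdf(coeffs / (1 - alpha), shape, scale=scale).sum()
        return int(ll1 > ll0)

    host = weibull_min.rvs(1.5, scale=2.0, size=64, random_state=1)
    print(decode_bit(host * 1.05, shape=1.5, scale=2.0))   # typically 1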
Critical Memory Data Access Monitor Based on Dynamic Strategy Learning
Feng Xinyue, Yang Qiusong, Shi Lin, Wang Qing, Li Mingshu
2019, 56(7):  1470-1487.  doi:10.7544/issn1000-1239.2019.20180577
VMM-based approaches have been widely adopted to monitor fine-grained memory access behavior by intercepting safety-critical memory accesses and the execution of critical instructions. However, intercepting memory access operations leads to significant performance overhead, because CPU control frequently transfers to the VMM. Some existing approaches resolve the performance problem by centralizing safety-critical data into given memory regions, but they need to modify the source code or binary files of the monitored system and cannot change monitoring strategies at runtime, so their application scenarios are limited. To reduce the performance overhead of monitoring memory access, in this paper we propose an approach, named DynMon, which controls safety-critical data access monitoring dynamically according to the system's runtime state. It neither depends on the source code nor modifies the binary files of the monitored systems. DynMon obtains dynamic monitoring strategies automatically by learning from historical data. Given the system's runtime status and the monitoring strategies, DynMon decides the memory access monitoring region dynamically at runtime and thus alleviates the performance burden by reducing the monitoring of safety-irrelevant regions. The evaluation shows that it reduces the performance cost by 22.23% compared with monitoring without a dynamic strategy, and the performance overhead does not increase significantly with a large amount of monitored data.
Lightweight Format-Preserving Encryption Algorithm Oriented to Number
Liu Botao, Peng Changgen, Wu Ruixue, Ding Hongfa, Xie Mingming
2019, 56(7):  1488-1497.  doi:10.7544/issn1000-1239.2019.20180745
The Internet of things (IoT), now widely deployed at large scale, raises more and more security and privacy issues. Lightweight encryption is an important measure for ensuring confidentiality on IoT devices, whose computing, storage, and energy resources are usually limited. However, naively applying a lightweight block cipher changes the format of the ciphertext tremendously because of its confusion and diffusion operations; the ciphertext becomes inconsistent with the plaintext in expressive form and format, which requires extra storage, computation, and redisplay resources. A lightweight format-preserving encryption algorithm can ensure data confidentiality while maintaining format consistency between ciphertext and plaintext, which greatly benefits the IoT. Aiming at the problems that traditional format-preserving encryption algorithms perform inefficiently, consume many resources, and cannot encrypt numeric data of arbitrary length, a lightweight format-preserving encryption algorithm oriented to numbers is proposed in this work. Firstly, a numeric-typed permutation table is constructed by using a lightweight block cipher algorithm; then the numeric plaintext is added digit by digit to the key of the lightweight block cipher and reduced modulo 10; finally, a substitution operation using the proposed numeric-typed permutation table produces the numeric ciphertext. The proposed algorithm preserves the format of numeric data of arbitrary length, and it is consistent with the original lightweight block cipher in terms of efficiency and security. Compared with traditional format-preserving encryption, the experimental results show that the proposed algorithm is more secure, more efficient, and consumes fewer resources. It is suitable for secure storage and data masking of numeric data on resource-constrained IoT devices.
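The digit-wise step described above can be illustrated in isolation: adding key digits modulo 10 keeps every symbol a decimal digit, so the ciphertext has exactly the plaintext's format. In the paper the key digits and the permutation table come from a lightweight block cipher; here they are hard-coded placeholders.

    def encrypt_digits(plain_digits, key_digits):
        # Digit-wise modulo-10 addition: the output is still a decimal string.
        return [(p + k) % 10 for p, k in zip(plain_digits, key_digits)]

    def decrypt_digits(cipher_digits, key_digits):
        return [(c - k) % 10 for c, k in zip(cipher_digits, key_digits)]

    plain = [6, 2, 2, 2, 0, 2, 1, 3]        # numeric plaintext (illustrative)
    key   = [4, 9, 1, 7, 3, 8, 0, 5]        # assumed key digits
    ct = encrypt_digits(plain, key)
    print(ct)                                # still 8 decimal digits
    print(decrypt_digits(ct, key) == plain)  # True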
Two-Layer Reversible Watermarking Algorithm Using Difference Expansion
Su Wengui, Shen Yulong, Wang Xiang
2019, 56(7):  1498-1505.  doi:10.7544/issn1000-1239.2019.20180736
The traditional difference expansion algorithm expands the difference between two adjacent pixels and embeds one bit of secret data into the expanded difference of each pixel pair if no overflow or underflow occurs, achieving an embedding rate of up to 0.5 bpp. Its shortcoming is that it cannot provide a higher embedding rate while keeping distortion low. To achieve higher capacity, multiple-layer embedding is required for traditional difference expansion reversible watermarking; however, repeatedly embedding the image in the same way does not effectively exploit the characteristics of difference expansion and the correlation among pixels, which results in large image distortion. To achieve better capacity and superior performance, a novel difference expansion-based algorithm that enables two-layer embedding and pixel pair selection is proposed. The cover image is first divided into pairs of pixels. By analyzing the modification mechanism of pixel pairs in difference expansion, a different pairing manner is developed for each embedding layer to better exploit the correlation among different pixel pairs. Furthermore, the mean of a pixel pair is used as a predictor to select smooth pixel pairs for embedding, so that a higher peak signal-to-noise ratio can be achieved even when the embedding rate exceeds 0.5 bpp. Experimental results verify that the proposed algorithm provides higher embedding capacity while maintaining lower distortion in image quality.
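The basic difference expansion operation on a single pixel pair (the classic scheme the algorithm builds on) can be written down directly; overflow/underflow checks are omitted in this sketch.

    def de_embed(x, y, bit):
        # Expand the pair difference and hide one bit in its least
        # significant position.
        l, h = (x + y) // 2, x - y
        h2 = 2 * h + bit
        return l + (h2 + 1) // 2, l - h2 // 2

    def de_extract(x2, y2):
        # Recover the hidden bit and restore the original pixel pair.
        l, h2 = (x2 + y2) // 2, x2 - y2
        bit, h = h2 & 1, h2 >> 1
        return (l + (h + 1) // 2, l - h // 2), bit

    print(de_embed(206, 201, 1))    # (209, 198)
    print(de_extract(209, 198))     # ((206, 201), 1)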
Negation and Speculation Scope Detection in Chinese
Ye Jing, Zou Bowei, Hong Yu, Shen Longxiang, Zhu Qiaoming, Zhou Guodong
2019, 56(7):  1506-1516.  doi:10.7544/issn1000-1239.2019.20180725
There are a great many negative and speculative expressions in natural language texts. Identifying such information and separating it from the affirmative content plays a critical role in a variety of downstream natural language processing applications, such as information extraction, information retrieval, and sentiment analysis. Compared with English, current research on negation and speculation scope detection for Chinese is scarce. In this paper, we propose a fusion model based on bidirectional long short-term memory (BiLSTM) networks and conditional random fields (CRF), and recast the scope detection problem as a sequence-labeling task: given a negation or speculation keyword, we identify its semantic scope in the sentence. The model learns not only forward and backward context information through the LSTM networks but also the dependency relationships between output labels via a CRF layer, motivated by the superiority of sequential architectures in encoding order information and long-range context dependencies. The experimental results on the CNeSp corpus show the effectiveness of the proposed model. On the financial dataset, our approach achieves performance of 79.16% and 76.79% for negation and speculation respectively, improvements of 25.06% and 34.46% over the state-of-the-art.
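Casting scope detection as sequence labeling means each token gets a tag relative to a given cue; a hypothetical tag layout (the exact tag set in the paper may differ) is shown below.

    def scope_tags(tokens, cue_index, scope_span):
        # Mark the cue token, the tokens inside its scope, and everything else.
        start, end = scope_span
        tags = []
        for i, _ in enumerate(tokens):
            if i == cue_index:
                tags.append("CUE")
            elif start <= i < end:
                tags.append("I-SCOPE")
            else:
                tags.append("O")
        return tags

    tokens = ["The", "company", "did", "not", "confirm", "the", "report", "."]
    print(scope_tags(tokens, cue_index=3, scope_span=(4, 7)))
    # ['O', 'O', 'O', 'CUE', 'I-SCOPE', 'I-SCOPE', 'I-SCOPE', 'O']

The BiLSTM provides contextual features per token, and the CRF layer enforces consistency among neighbouring tags when predicting such sequences.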
User Intent Classification Based on IndRNN-Attention
Zhang Zhichang, Zhang Zhenwen, Zhang Zhiman
2019, 56(7):  1517-1524.  doi:10.7544/issn1000-1239.2019.20180648
Recently, with the development of big data and deep learning techniques, human-computer dialogue technology has emerged as a hot topic attracting attention from both academia and industry. Numerous application products based on human-computer dialogue technology, such as Apple Siri, Microsoft Cortana, and Huawei smart speakers, appear in our lives and bring great convenience. However, making a dialogue system identify and understand user intent more accurately is still a great challenge. This paper therefore proposes a method named IndRNN-Attention, based on the independently recurrent neural network (IndRNN) and a word-level attention mechanism, for the user intent classification problem. Firstly, we encode the user's input message text through a multi-layer IndRNN. Secondly, we use a word-level attention mechanism to increase the contribution of domain-related words to the encoding and generate the final representation vector of the input text. Finally, we classify this representation vector through a softmax layer and output the classification result. We introduce the IndRNN to mitigate the gradient vanishing and explosion problems, and integrate the word-level attention mechanism to improve the quality of the text representation. Experimental results show that the proposed IndRNN-Attention approach achieves a macro-averaged F1 (F_macro) of 0.93 on the user intent classification task and significantly outperforms state-of-the-art approaches.
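The distinguishing feature of an IndRNN layer is that its recurrent weight is a vector applied element-wise, so each hidden unit only recurs with itself; a minimal NumPy sketch of the forward pass (illustrative sizes, not the paper's model) follows.

    import numpy as np

    def indrnn_forward(x_seq, W, u, b):
        # h_t = ReLU(W x_t + u * h_{t-1} + b), with u a vector rather than a
        # matrix: this independence between units eases gradient vanishing
        # and explosion over long sequences.
        h = np.zeros(W.shape[0])
        states = []
        for x_t in x_seq:                      # x_seq: (T, input_dim)
            h = np.maximum(W @ x_t + u * h + b, 0.0)
            states.append(h)
        return np.stack(states)                # (T, hidden_dim)

    rng = np.random.default_rng(0)
    T, d_in, d_h = 20, 50, 32
    out = indrnn_forward(rng.normal(size=(T, d_in)),
                         W=rng.normal(scale=0.1, size=(d_h, d_in)),
                         u=rng.uniform(0, 1, size=d_h),
                         b=np.zeros(d_h))
    print(out.shape)                           # (20, 32)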
Chinese Text Extraction Method of Natural Scene Images Based on City Monitoring
Xiao Ke, Dai Shun, He Yunhua, Sun Limin
2019, 56(7):  1525-1533.  doi:10.7544/issn1000-1239.2019.20180543
Efficient environment monitoring and information analysis in urban scenes has become one of the primary tasks of smart cities. In smart cities, recognizing text information in scene images, especially extracting Chinese text, is an intuitive and efficient way to analyze scene information. However, current Chinese text extraction from scene images fails to achieve good results because of uneven illumination and image blur, and the structural complexity of Chinese characters is another important factor affecting extraction. To solve this problem, this paper proposes an edge-enhanced maximally stable extremal regions (MSER) detection method, which can extract MSERs under the influence of uneven illumination and blur; non-character regions are then efficiently filtered out by geometric feature constraints to obtain high-quality candidate MSERs. The proposed central aggregation is then used to merge a candidate Chinese text region that has been split into multiple MSERs into a single candidate Chinese text component; these components are analyzed, and the correct Chinese text is finally selected by machine learning. Experiments show that the algorithm extracts Chinese text from natural scene images more effectively.
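The candidate-region stage can be prototyped with OpenCV's MSER detector plus a simple geometric filter, as sketched below; the thresholds are illustrative, and the edge-enhancement and central-aggregation steps of the paper are omitted.

    import cv2
    import numpy as np

    # Synthetic grayscale image with dark text on a light background.
    gray = np.full((120, 240), 255, np.uint8)
    cv2.putText(gray, "text", (20, 80), cv2.FONT_HERSHEY_SIMPLEX, 2, 0, 4)

    mser = cv2.MSER_create()
    regions, bboxes = mser.detectRegions(gray)

    # Keep only regions whose aspect ratio and height are plausible for a
    # character component (illustrative geometric constraints).
    candidates = [(x, y, w, h) for (x, y, w, h) in bboxes
                  if 0.1 < w / h < 10 and h > 8]
    print(len(candidates), "candidate regions")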
A Fair Distribution Strategy Based on Shared Fair and Time-Varying Resource Demand
Li Jie, Zhang Jing, Li Weidong, Zhang Xuejie
2019, 56(7):  1534-1544.  doi:10.7544/issn1000-1239.2019.20180798
It is critical to allocate multiple types of resources efficiently and fairly in a cloud computing system, and allocating computing and storage resources through resource sharing has emerged as an effective way to improve resource utilization. While in reality users' resource requirements may change at any time, most previous work is based on the premise that the number of tasks a user requires is unlimited and the demand does not change. To solve the allocation problem in which users have limited, time-varying resource requirements, we propose a multi-resource fair allocation mechanism based on the concept of sharing fairness. Firstly, at the conceptual level, we develop a linear programming model according to users' dynamic, limited task resource requirements and the amount of resources they share, and we prove that the mechanism satisfies four significant fairness properties: sharing incentive, Pareto efficiency, envy-freeness, and truthfulness. Secondly, for the concrete allocation problem, a heuristic algorithm is proposed. The algorithm is designed around the concept of a user sharing coefficient, which ensures the fairness of the allocation and that no user suffers a loss from sharing. Theoretical and experimental results show that the proposed mechanism achieves good results in ensuring the fairness of user resource allocation and maintaining high resource utilization when users submit multiple sets of time-varying resource requirements.
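The flavor of the linear programming formulation can be illustrated with a two-user, two-resource toy instance (SciPy, made-up numbers): maximize the number of allocated tasks subject to capacities and each user's limited demand. The paper's model additionally imposes the sharing-fairness constraints, which are omitted here.

    from scipy.optimize import linprog

    # 18 CPUs and 36 GB of memory are shared.  Each user-1 task needs
    # <1 CPU, 4 GB>, each user-2 task needs <3 CPU, 1 GB>; users have at most
    # 6 and 8 remaining tasks (the limited, time-varying demand caps).
    c = [-1, -1]                        # maximize x1 + x2 (linprog minimizes)
    A = [[1, 3],                        # CPU usage per task
         [4, 1]]                        # memory usage per task
    b = [18, 36]
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, 6), (0, 8)])
    print(res.x)                        # tasks allocated to user 1 and user 2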
An Ultra Lightweight Container that Maximizes Memory Sharing and Minimizes the Runtime Environment
Zhang Liqing, Guo Dong, Wu Shaoling, Cui Haibo, Wang Wei
2019, 56(7):  1545-1555.  doi:10.7544/issn1000-1239.2019.20180511
The rise of container technology has brought profound changes to the data center, and a large amount of software has shifted to micro-service deployment and delivery. It is therefore of broad practical significance to optimize the startup, operation, and maintenance of large-scale containers in massive-user environments. At present, the mainstream container technology represented by Docker has achieved great success, but there is still much room for improvement in image size and resource sharing. We review the development of virtualization technology and argue that lightweight virtualization is the future research direction, which is especially important for data-sensitive applications. By establishing a library file sharing model, we explore the impact of the degree of library file sharing on the maximum number of containers that can be launched. We present an ultra-lightweight container design that minimizes the container runtime environment supporting application execution by refining the granularity of operational resources. At the same time, we extract the library files and the executable binary files into a single layer, which realizes maximum sharing of the host's memory resources among containers. According to this scheme, we implement an ultra-lightweight container management engine, REG (runtime environment generation), and define a REG-based workflow. Finally, we carry out a series of comparative experiments on image size, startup speed, memory usage, container startup storms, etc., and verify the effectiveness of the proposed method in large-scale container environments.
Regional Ocean Model Parallel Optimization in “Sunway TaihuLight”
Wu Qi, Ni Yufang, Huang Xiaomeng
2019, 56(7):  1556-1566.  doi:10.7544/issn1000-1239.2019.20180791
As an important component of earth system modeling, the ocean model plays a vital role in many fields: it is not only an indispensable research tool for studying oceans, estuaries, and coasts, but forecasting systems based on ocean models can also predict typhoons and tsunamis in real time. To simulate finer-grained oceanic changes, ocean models are moving toward higher resolution and more physical parameterization schemes, and general-purpose computers can no longer meet their needs. As heat dissipation and power consumption become the major bottlenecks of general-purpose processors, multi-core and many-core designs, and the resulting heterogeneous platforms, have become the main trend of next-generation supercomputers, providing a solid foundation for developing high-resolution ocean models. Based on the domestic supercomputer "Sunway TaihuLight", this paper takes advantage of its heterogeneous many-core architecture to port and optimize the regional Princeton ocean model (POM), fully utilizing the characteristics and advantages of the domestic heterogeneous many-core platform. By using master-slave core collaboration, the high-resolution ocean model swPOM improves performance by about 13 times compared with using only the master cores and about 2.8 times compared with a general Intel platform, and it scales up to 250,000 cores, providing sufficient support for real-time forecasting systems.
A Massively Parallel Bayesian Approach to Factorization-Based Analysis of Big Time Series Data
Gao Tengfei, Liu Yongyan, Tang Yunbo, Zhang Lei, Chen Dan
2019, 56(7):  1567-1577.  doi:10.7544/issn1000-1239.2019.20180792
Big time series data record the evolution of complex systems at large temporal and spatial scales, with great detail on the interactions among different parts of the system. Extracting the latent low-dimensional factors plays a crucial role in examining the overall mechanism of the underlying complex system. Research challenges arise from the lack of a priori knowledge, and most conventional factorization methods cannot adapt to the ultra-high dimensionality and scale of big data. Aiming at this grand challenge, this study develops a massively parallel Bayesian approach (G-BF) to factorization-based analysis of tensors formed by massive time series. The approach relies on a Bayesian algorithm to derive the factor matrices in the absence of a priori information; the algorithm is then mapped to the compute unified device architecture (CUDA) model to update the factor matrices in a massively parallel manner. The proposed approach is designed to support factorization of tensors of arbitrary dimensions. Experimental results indicate that 1) compared with GPU-based hierarchical alternating least squares (G-HALS), G-BF exhibits much better runtime performance, and the superiority becomes more obvious as the data scale increases; 2) G-BF has excellent scalability in terms of both data volume and rank; 3) applying G-BF to the existing framework for fusing sub-factors (hierarchical-parallel factor analysis, H-PARAFAC) makes it possible to factorize a huge tensor (volume up to 10^11, over two nodes) as a whole, a capability two orders of magnitude beyond that of conventional methods.
Performance Optimization of Lustre File System Based on Reinforcement Learning
Zhang Wentao, Wang Lu, Cheng Yaodong
2019, 56(7):  1578-1586.  doi:10.7544/issn1000-1239.2019.20180797
High energy physics computing is a typical data-intensive application. The throughput and response time of the distributed storage system are key performance indicators and are often the targets of performance optimization. A distributed storage system has a large number of tunable parameters, and their settings have a great influence on system performance. At present, these parameters are either set to static values or tuned automatically by heuristic rules defined by experienced administrators. Neither method is satisfactory, given the diversity of data access patterns and hardware capabilities and the difficulty of finding heuristic rules for hundreds of interacting parameters based on human experience. In fact, if the tuning engine is regarded as an agent and the storage system as the environment, the parameter adjustment problem of the storage system can be treated as a typical sequential decision problem. Therefore, based on the data access characteristics of high energy physics computing, we propose an automated parameter tuning method using reinforcement learning. Experiments show that in the same test environment, with the default parameters of the Lustre file system as the baseline, this method increases throughput by about 30%.
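To make the agent/environment framing concrete, the toy Q-learning loop below tunes a single simulated knob whose reward stands in for a measured throughput; the state and action definitions, parameter levels, and reward function are all placeholders, not the paper's actual setup.

    import random
    from collections import defaultdict

    ACTIONS = [-1, 0, +1]                 # decrease / keep / increase the knob
    N_LEVELS, BEST = 8, 5                 # parameter levels; level 5 is optimal

    def measure_throughput(level):        # placeholder for a real benchmark run
        return 100 - 10 * abs(level - BEST) + random.uniform(-1, 1)

    Q = defaultdict(float)
    state, eps, alpha, gamma = 0, 0.2, 0.3, 0.9
    for _ in range(2000):
        action = (random.choice(ACTIONS) if random.random() < eps
                  else max(ACTIONS, key=lambda a: Q[(state, a)]))
        nxt = min(max(state + action, 0), N_LEVELS - 1)
        reward = measure_throughput(nxt)
        Q[(state, action)] += alpha * (reward + gamma *
                                       max(Q[(nxt, a)] for a in ACTIONS) -
                                       Q[(state, action)])
        state = nxt
    print("tuned level:", state)          # typically settles near level 5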