ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 June 2020, Volume 57 Issue 6
An Energy Consumption Optimization and Evaluation for Hybrid Cache Based on Reinforcement Learning
Fan Hao, Xu Guangping, Xue Yanbing, Gao Zan, Zhang Hua
2020, 57(6):  1125-1139.  doi:10.7544/issn1000-1239.2020.20200010
Emerging non-volatile STT-RAM memory features low leakage power, high density, fast read speed, and high write energy, while SRAM features high leakage power, low density, fast read and write speed, and low write energy. A hybrid cache of SRAM and STT-RAM exploits the respective advantages of both memory media, providing lower leakage power and higher cell density than SRAM, and higher write speed and lower write energy than STT-RAM. The hybrid cache architecture achieves these benefits mainly by placing write-intensive data in SRAM and read-intensive data in STT-RAM; how to identify and allocate read- and write-intensive data is therefore the key challenge in hybrid cache design. This paper proposes a cache management method based on reinforcement learning that uses the write intensity and reuse information of cache access requests to design a cache allocation policy and optimize energy consumption. The key idea is to use a reinforcement learning algorithm to learn, from the energy consumption of cache line sets, a weight for allocating each set to SRAM or STT-RAM; a cache line in a set is then allocated to the region with the greater weight. Evaluations show that the proposed policy reduces average energy consumption by 16.9% (9.7%) in a single-core (quad-core) system compared with previous policies.
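As a rough illustration of the set-level learning idea described in this abstract, the sketch below keeps one weight per region and allocates new lines to the region with the greater weight. The energy costs, epsilon-greedy choice, and update rule are our illustrative assumptions, not the authors' exact algorithm.

```python
# Hypothetical sketch of weight-based hybrid-cache allocation: each cache-set
# group keeps a weight per region (SRAM vs. STT-RAM) that is updated from the
# observed energy cost in a bandit-style reinforcement loop.
import random

# Assumed per-access energy costs (nJ), illustrative only
E = {"SRAM": {"read": 0.5, "write": 0.6},
     "STT":  {"read": 0.4, "write": 2.0}}

class SetAllocator:
    def __init__(self, alpha=0.1, epsilon=0.05):
        self.w = {"SRAM": 0.0, "STT": 0.0}  # learned weights (higher = preferred)
        self.alpha, self.epsilon = alpha, epsilon

    def choose_region(self):
        # epsilon-greedy: mostly allocate to the region with the greater weight
        if random.random() < self.epsilon:
            return random.choice(["SRAM", "STT"])
        return max(self.w, key=self.w.get)

    def update(self, region, access_kind):
        # reward is negative energy, so low-energy outcomes raise the weight
        reward = -E[region][access_kind]
        self.w[region] += self.alpha * (reward - self.w[region])
```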
Optimizing Winograd-Based Fast Convolution Algorithm on Phytium Multi-Core CPUs
Wang Qinglin, Li Dongsheng, Mei Songzhu, Lai Zhiquan, Dou Yong
2020, 57(6):  1140-1151.  doi:10.7544/issn1000-1239.2020.20200107
Convolutional neural networks (CNNs) have been extensively used in artificial intelligence fields such as computer vision and natural language processing. Winograd-based fast convolution algorithms can effectively reduce the computational complexity of convolution operations in CNNs and have therefore attracted great attention. With the application of the Phytium multi-core CPUs independently developed by the National University of Defense Technology in artificial intelligence fields, there is a strong demand for high-performance convolution primitives on these CPUs. This paper proposes a new high-performance parallel Winograd-based fast convolution algorithm after studying the architectural characteristics of Phytium multi-core CPUs and the computational characteristics of Winograd-based fast convolution algorithms. The new parallel algorithm does not rely on general matrix multiplication routines and consists of four stages: kernel transformation, input feature map transformation, element-wise multiplication, and output feature map inverse transformation. The data movements in all four stages are collaboratively optimized to improve the memory access performance of the algorithm. Custom data layouts, multi-level parallel data transformation algorithms, and a multi-level parallel matrix multiplication algorithm are also proposed to support these optimizations efficiently. The algorithm is tested on two Phytium multi-core CPUs. Compared with the Winograd-based fast convolution implementations in the ARM Compute Library (ACL) and NNPACK, it achieves speedups of 1.05~16.11 times and 1.66~16.90 times, respectively. Applying the algorithm in the open-source framework MXNet improves the forward-propagation performance of the VGG16 network by 3.01~6.79 times.
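For background, such algorithms implement Winograd's minimal filtering in the 2-D form popularized by Lavin and Gray; this is standard notation, not the paper's own derivation:

```latex
% 2-D Winograd minimal filtering F(m x m, r x r), Lavin & Gray form:
% g is an r x r kernel tile, d an (m+r-1) x (m+r-1) input tile,
% G, B, A are constant transform matrices, \odot the element-wise product.
Y = A^{\top} \bigl[\, (G\, g\, G^{\top}) \odot (B^{\top} d\, B) \,\bigr] A
```

The four stages named in the abstract map onto the four factors: kernel transformation computes G g G^T, input feature map transformation computes B^T d B, these are multiplied element-wise, and the inverse transformation applies A^T(.)A.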
Efficient Optimization of Graph Computing on High-Throughput Computer
Zhang Chenglong, Cao Huawei, Wang Guobo, Hao Qinfen, Zhang Yang, Ye Xiaochun, Fan Dongrui
2020, 57(6):  1152-1163.  doi:10.7544/issn1000-1239.2020.20200115
With the rapid development of computing technology, the scale of graphs has grown explosively, and large-scale graph computing has become a research focus in recent years. Breadth-first search (BFS) is a classic algorithm for graph traversal and the main kernel of the Graph500 benchmark, which evaluates the performance of supercomputers and servers on data-intensive applications. The high-throughput computer (HTC) adopts an ARM-based many-core architecture characterized by high concurrency, strong real-time performance, and low power consumption. A series of BFS optimizations has been developed for single-node systems. In this paper, we first introduce the parallel BFS algorithm and existing optimizations. We then propose two optimization techniques for HTC that improve the efficiency of data access and data locality, and we systematically evaluate the performance of the BFS algorithm on HTC. For a Kronecker graph of scale 30, with 2^30 vertices and 2^34 edges, the average performance on HTC is 24.26 GTEPS, 1.18 times faster than a two-way x86 server. In terms of energy efficiency, the HTC result of 181.04 MTEPS/W ranked 2nd on the June 2019 Green Graph500 big data list. To the best of our knowledge, this is the first work to evaluate BFS performance on the HTC platform. HTC is well suited to data-intensive applications such as large-scale graph computing.
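For readers unfamiliar with Graph500 BFS kernels, the sketch below shows the direction-optimizing (top-down/bottom-up) skeleton that optimized implementations typically start from. It is a simplified serial illustration in Python; the switching threshold alpha is our assumption, not a detail taken from the paper.

```python
# Direction-optimizing BFS skeleton: switch to bottom-up when the frontier
# carries many edges, back to top-down when it shrinks.
def bfs(adj, source, alpha=4):
    n = len(adj)
    parent = [-1] * n
    parent[source] = source
    frontier = [source]
    while frontier:
        frontier_edges = sum(len(adj[v]) for v in frontier)
        nxt = []
        if frontier_edges > alpha * len(frontier):
            # bottom-up step: each unvisited vertex scans for a frontier neighbor
            front = set(frontier)
            for v in range(n):
                if parent[v] == -1:
                    for u in adj[v]:
                        if u in front:
                            parent[v] = u
                            nxt.append(v)
                            break
        else:
            # top-down step: expand frontier vertices along their edges
            for u in frontier:
                for v in adj[u]:
                    if parent[v] == -1:
                        parent[v] = u
                        nxt.append(v)
        frontier = nxt
    return parent
```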
Programming and Developing Environment for FPGA Graph Processing: Survey and Exploration
Guo Jinyang, Shao Chuanming, Wang Jing, Li Chao, Zhu Haojin, Guo Minyi
2020, 57(6):  1164-1178.  doi:10.7544/issn1000-1239.2020.20200106
Thanks to their high performance and efficiency, graph processing accelerators based on the reconfigurable architecture of field-programmable gate arrays (FPGAs) have attracted much attention, as they can satisfy complex graph applications with various basic operations and large-scale graph data. However, efficient code design for FPGAs takes a long time, and existing functional programming environments cannot achieve desirable performance. The "programming wall" on FPGAs is thus a significant problem and has become a serious obstacle to designing dedicated accelerators. A well-designed programming environment is necessary for the further popularity of FPGA-based graph processing accelerators; it calls for convenient application programming interfaces, scalable programming models, efficient high-level synthesis tools, and a domain-specific language that can integrate software/hardware features and generate high-performance underlying code. In this article, we make a systematic exploration of the programming environment for FPGA graph processing, introducing and analyzing programming models, high-level synthesis, programming languages, and related hardware frameworks. In addition, we review the development of FPGA-based graph processing accelerators both in China and abroad. Finally, we discuss open issues and challenges in this specific area.
A Cross-Layer Memory Tracing Toolkit for Big Data Application Based on Spark
Xu Danya, Wang Jing, Wang Li, Zhang Weigong
2020, 57(6):  1179-1190.  doi:10.7544/issn1000-1239.2020.20200109
Spark has increasingly been employed by industry for big data analytics due to its efficient in-memory distributed programming model. Most existing optimization and analysis tools for Spark work at either the application layer or the operating system layer alone, which separates Spark semantics from the underlying actions: for example, without knowing how operating system parameters affect the performance of the Spark layer, one cannot use OS parameters to tune system performance. In this paper, we propose SMTT, a new Spark memory tracing toolkit that establishes the semantics between the upper application and the underlying physical hardware across the Spark, JVM, and OS layers. Based on the characteristics of Spark memory, we design tracking schemes for execution memory and storage memory, respectively. We then use SMTT to analyze the Spark iterative computation process and execution/storage memory usage. An experiment on RDD memory assessment shows that our toolkit can be effectively used for performance analysis and provide guidance for the optimization of the Spark memory system.
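As a hedged illustration of what cross-layer tracing involves, the Python sketch below samples OS-level resident memory for an executor process and tags each sample with the Spark stage reported by the application, so OS numbers can later be aligned with Spark semantics. The /proc interface is Linux-specific, and all names here are our assumptions rather than SMTT's actual implementation.

```python
# Correlate OS-level memory samples with Spark-level stage identity.
import time

def sample_rss(pid):
    # resident set size (kB) of the executor's JVM process, from Linux /proc
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0

def trace(pid, current_stage, duration_s=60, interval_s=1.0):
    # current_stage: callable returning the Spark stage id now running,
    # e.g. polled from the Spark monitoring REST API (an assumption)
    samples = []
    for _ in range(int(duration_s / interval_s)):
        samples.append((time.time(), current_stage(), sample_rss(pid)))
        time.sleep(interval_s)
    return samples  # (timestamp, stage id, RSS in kB) triples
```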
Performance Optimization of Cache Subsystem in General Purpose Graphics Processing Units: A Survey
Zhang Jun, Xie Jingcheng, Shen Fanfan, Tan Hai, Wang Lümeng, He Yanxiang
2020, 57(6):  1191-1207.  doi:10.7544/issn1000-1239.2020.20200113
With the development of process technology and the improvement of architecture, the parallel computing performance of GPGPUs (general-purpose graphics processing units) has advanced greatly, and GPGPUs are applied more and more widely in high-performance and high-throughput fields. A GPGPU obtains high parallel computing performance because it can hide the long latency of memory accesses by supporting thousands of concurrent threads. However, due to the existence of irregular computation and memory accesses in some applications, the performance of the memory subsystem suffers; in particular, contention for the on-chip cache can become serious, and GPGPU performance cannot reach its maximum. Alleviating this contention and optimizing on-chip cache performance have become major approaches to GPGPU optimization. At present, studies on on-chip cache performance optimization focus on five aspects: TLP (thread-level parallelism) throttling, memory access reordering, data flux enhancement, LLC (last-level cache) optimization, and new architecture designs based on NVM (non-volatile memory). This paper discusses research methods for on-chip cache performance optimization from these aspects and, in the end, outlines some interesting future research directions for on-chip cache optimization. The contents of this paper are of great significance for research on the cache subsystem in GPGPUs.
Research Advances in the Interpretability of Deep Learning
Cheng Keyang, Wang Ning, Shi Wenxi, Zhan Yongzhao
2020, 57(6):  1208-1217.  doi:10.7544/issn1000-1239.2020.20190485
Research on the interpretability of deep learning is closely related to disciplines such as artificial intelligence, machine learning, logic, and cognitive psychology. It has important theoretical significance and practical application value in many fields, such as information push, medical research, finance, and information security. A great deal of work has been done in this field in the past few years, but various issues remain. In this paper, we review the history of deep learning interpretability research and related work. First, we introduce the history of interpretable deep learning from three aspects: the origin of interpretable deep learning, the research exploration stage, and the model construction stage. Then, the state of research is presented from three aspects: visual analysis, robust perturbation analysis, and sensitivity analysis. Research on the construction of interpretable deep learning models is introduced from four aspects: model agents, logical reasoning, network node association analysis, and traditional machine learning models. Moreover, the limitations of current research are analyzed and discussed. Finally, we list typical applications of interpretable deep learning and forecast possible future research directions of this field, along with suggestions.
Indoor Scene Understanding by Fusing Multi-View RGB-D Image Frames
Li Xiangpan, Zhang Biao, Sun Fengchi, Liu Jie
2020, 57(6):  1218-1226.  doi:10.7544/issn1000-1239.2020.20190578
For intelligent robots, correctly understanding the environment is an important and challenging ability, and scene understanding has thus become a key problem in the robotics community. In the future, more and more families will have service robots living with them. Family robots need to sense and understand the surrounding environment reliably and autonomously, relying on their on-board sensors and scene understanding algorithms. Specifically, a running robot has to recognize various objects and the relations between them to autonomously carry out tasks and perform intelligent human-robot interaction. The RGB-D (RGB depth) visual sensors commonly used by robots to capture color and depth information have a limited field of view, so it is often difficult to capture the whole scene of a large indoor space in a single image. Fortunately, robots can move to different locations and acquire RGB-D images from multiple perspectives that together cover the whole scene. For this situation, we propose an indoor scene understanding algorithm based on the fusion of multi-view RGB-D images. The algorithm detects objects and extracts object relationships from single RGB-D images, then detects instance-level objects across multiple RGB-D frames, and constructs an object-relation-oriented topological map as the model of the whole scene. By dividing the RGB-D images into cells and extracting color histogram features from the cells, we find and associate the same objects in different frames using an object instance detection algorithm based on the longest common subsequence, overcoming the adverse influence of the RGB-D camera's viewpoint changes on image fusion. Finally, experimental results on the NYUv2 dataset demonstrate the effectiveness of the proposed algorithm.
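The association step can be pictured with the classic LCS dynamic program. In the hedged Python sketch below, the histogram-intersection similarity test and its threshold are our illustrative assumptions, not the paper's exact matching criterion.

```python
# Match two frames' observation sequences by longest common subsequence,
# where "equal" means the cells' color histograms are similar enough.
def similar(h1, h2, thr=0.9):
    # histogram intersection over L1-normalized histograms (assumed measure)
    return sum(min(a, b) for a, b in zip(h1, h2)) >= thr

def lcs_len(seq_a, seq_b):
    # classic dynamic program; seq_a/seq_b are per-frame histogram sequences
    m, n = len(seq_a), len(seq_b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if similar(seq_a[i - 1], seq_b[j - 1]):
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```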
Agent Trust Boost via Reinforcement Learning DQN
Qi Faxin, Tong Xiangrong, Yu Lei
2020, 57(6):  1227-1238.  doi:10.7544/issn1000-1239.2020.20190403
A trust recommendation system is an important application of recommendation systems based on social networks: it combines trust relationships between users to recommend items. However, previous studies generally assume that the trust value between users is fixed, so they cannot respond in a timely manner to dynamic changes in user trust and preferences, which affects the recommendation effect. In fact, after receiving a recommendation, there is a difference between the actual evaluation and the expected evaluation, and this difference is correlated with the trust value: the user's trust in the recommender increases when the actual evaluation is higher than the expected evaluation, and vice versa. Based on the dynamics of trust and the process by which trust changes between users, this paper proposes a trust boost method based on reinforcement learning. The least mean square (LMS) algorithm is used to learn the dynamic impact of the evaluation difference on the user's trust. In addition, the deep Q-network (DQN) reinforcement learning method is studied to simulate the process of learning the user's preferences and boosting the trust value. Finally, a polynomial-time algorithm is proposed to compute the trust value and the recommendation, which motivates the recommender to learn the user's preferences and keeps the user's trust in the recommender at a high level. Experiments indicate that our method, applied to recommendation systems, can respond quickly to changes in user preferences and achieves better recommendation accuracy than other methods.
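The trust update driven by the evaluation difference can be sketched as a single LMS-style step. The concrete form and step size below are our assumptions, shown only to make the direction of the update concrete.

```python
# Trust grows when the actual evaluation beats the expectation, shrinks
# otherwise; mu plays the role of the LMS step size (assumed value).
def update_trust(trust, actual_eval, expected_eval, mu=0.05):
    diff = actual_eval - expected_eval  # positive -> trust should rise
    trust += mu * diff                  # LMS-style step along the error
    return min(max(trust, 0.0), 1.0)    # clamp trust to [0, 1]
```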
Duration-HyTE: A Time-Aware Knowledge Representation Learning Method Based on Duration Modeling
Cui Yuanning, Li Jing, Shen Li, Shen Yang, Qiao Lin, Bo Jue
2020, 57(6):  1239-1251.  doi:10.7544/issn1000-1239.2020.20190253
Knowledge representation learning is the foundation of knowledge acquisition and reasoning. It is widely used in entity extraction, entity alignment, recommendation systems, and other fields, and has become an important issue throughout the construction and application of knowledge graphs. With the development of large-scale knowledge graphs containing time labels, time-aware knowledge representation learning has become one of the research hotspots in this field in recent years. Traditional time-aware knowledge representation learning methods cannot effectively use the distribution of the valid duration of knowledge. In this paper, we propose an improved time-aware knowledge representation learning method, combining a hyperplane model with duration modeling, to solve this problem. First, we divide meta facts into persistent facts and instantaneous facts according to their valid duration. Then we model the valid duration of knowledge, which yields a method for computing valid reliability. Finally, we propose a new knowledge representation learning method by improving the score function with valid reliability. Wikidata12K and YAGO11K are two knowledge graph datasets containing time labels; we extract two new persistent-fact datasets from them and conduct a series of comparative experiments on all four datasets. The results show that the link prediction and time prediction performance of the Duration-HyTE method is effectively improved. In particular, on the Wikidata12K dataset, the link prediction accuracy of the Duration-HyTE method for head and tail entities is improved by 25.7% and 35.8%, respectively, compared with other traditional and advanced knowledge representation methods.
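For reference, the hyperplane model that Duration-HyTE extends is HyTE, which projects entity and relation embeddings onto a timestamp-specific hyperplane and scores triples there; per the abstract, Duration-HyTE then reweights this score with the learned valid reliability. The formulas below are HyTE's standard form, not the paper's modified score function:

```latex
% HyTE: project embedding e onto the hyperplane of timestamp \tau with
% unit normal w_\tau, then score the triple (h, r, t) translationally.
P_{\tau}(e) = e - (w_{\tau}^{\top} e)\, w_{\tau}, \qquad
f_{\tau}(h, r, t) = \bigl\lVert P_{\tau}(h) + P_{\tau}(r) - P_{\tau}(t) \bigr\rVert
```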
The Construction and Analysis of Classical Chinese Poetry Knowledge Graph
Liu Yutong, Wu Bin, Bai Ting
2020, 57(6):  1252-1268.  doi:10.7544/issn1000-1239.2020.20190641
Classical Chinese poetry is a precious cultural heritage. It is significant to use the rich information in classical Chinese poetry to further investigate the language, literature, and historical development of Chinese culture. However, the knowledge of classical Chinese poetry is highly fragmented: it exists not only in the poetry itself but is also widely distributed in the materials used to explain poetry, such as annotations, translations, and appreciations. Our aim is to obtain the potential semantic relationships between words and expressions and to link them with a knowledge graph. By doing this, we can integrate fragmented knowledge in a systematic way, enabling better reasoning and analysis of classical Chinese poetry knowledge. In this paper, we propose a method to construct a classical Chinese poetry knowledge graph (CCP-KG). To build the nodes of CCP-KG, we use an improved Apriori algorithm to generate candidate words and then check whether each candidate word appears in the annotations to determine whether it becomes a node. To build the edges of CCP-KG, semantic relationships between words are established through the annotations, and an artificially constructed hierarchical structure of classical Chinese poetry is used to establish relationships between abstract semantics. The resulting CCP-KG covers every aspect of classical Chinese poetry and contains multi-layer semantic links between words. Taking Tang poetry as an example, CCP-KG can be used to analyze classical Chinese poems along different dimensions; compared with character-based data analysis, it supports more in-depth literary research from the perspective of semantics. CCP-KG can also be applied to various reasoning and analysis tasks in classical Chinese poetry. We conduct experiments on determining the theme of a poem and analyzing its emotion, showing the effectiveness and application value of the constructed CCP-KG.
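The candidate-word step can be pictured with a small Apriori-style n-gram counter. In the toy Python sketch below, the support threshold and the treatment of the annotations as one searchable text are our simplifying assumptions, not the paper's improved algorithm.

```python
# Apriori-style candidate words: frequent (k)-grams are extended only from
# frequent (k-1)-grams, then kept as nodes if they occur in the annotations.
from collections import Counter

def frequent_ngrams(lines, min_support=5, max_len=4):
    frequent = {1: Counter(ch for line in lines for ch in line)}
    frequent[1] = Counter({g: c for g, c in frequent[1].items() if c >= min_support})
    for k in range(2, max_len + 1):
        counts = Counter()
        for line in lines:
            for i in range(len(line) - k + 1):
                gram = line[i:i + k]
                # Apriori pruning: both (k-1)-gram sub-strings must be frequent
                if gram[:-1] in frequent[k - 1] and gram[1:] in frequent[k - 1]:
                    counts[gram] += 1
        frequent[k] = Counter({g: c for g, c in counts.items() if c >= min_support})
    return frequent

def candidate_nodes(frequent, annotation_text):
    # keep multi-character candidates that also appear in the annotations
    return [g for k in range(2, len(frequent) + 1)
              for g in frequent[k] if g in annotation_text]
```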
Multi-Strategy Solution Space Graph Search Algorithm of Real-Time Ride-Sharing Problem
Guo Yuhan, Zhang Yu, Shen Xueli, Yu Junyu
2020, 57(6):  1269-1283.  doi:10.7544/issn1000-1239.2020.20190484
Ride-sharing can improve transportation efficiency, alleviate traffic congestion, reduce environmental pollution, and save travel costs by decreasing the vehicle vacancy rate among all participants. In this paper, a mathematical model is first constructed for the real-time ride-sharing problem, in which the utilization efficiency of driver resources is evaluated by means of the shared route percentage and a detour distance constraint. Then, a solution space graph theory for discrete permutation problems is proposed, and its principle is elaborated and analyzed. Based on this theory, a multi-strategy solution space graph search algorithm is constructed. The algorithm uses a parallel structure to generate the value matrix of driver-rider pairs, which greatly improves efficiency compared with traditional approaches. Several search operators, designed for the characteristics of discrete permutation problems, are manipulated by multiple control strategies that guide the search toward higher-value directions of the solution space graph, efficiently obtaining high-quality matching results. The experimental results show that the solution quality of the algorithm can exceed 95% of the optimal solution, and its efficiency is significantly superior to the other algorithms compared in the experiments.
A Distributed Data Reconstruction Algorithm Based on Jacobi ADMM for Compressed Sensing in Sensor Networks
Li Guorui, Meng Jie, Peng Sancheng, Wang Cong
2020, 57(6):  1284-1291.  doi:10.7544/issn1000-1239.2020.20190587
Considering the application scenario of decentralized data collection in wireless sensor networks (WSNs), a distributed data reconstruction algorithm based on Jacobi ADMM (alternating direction method of multipliers) for compressed sensing is proposed, adopting the JSM-1 (joint sparse model-1) of distributed compressed sensing (DCS) theory. By exchanging common information among cluster heads to determine the common components of the correlated sensed data and updating the innovation components at each cluster head, the compressed sensed data in a WSN are reconstructed in a distributed way. The data collection operation is first abstracted as a distributed optimization problem. To avoid non-convergence in the distributed reconstruction process, a proximal term is then introduced into the optimization problem to convert the subproblems of the objective function into strictly convex forms. After that, the ADMM method is used to solve the data reconstruction problem. Both a synthetic dataset and real-world datasets are used in experiments to verify the performance of the proposed algorithm. Experimental results show that the proposed algorithm provides higher reconstruction accuracy than state-of-the-art data reconstruction algorithms.
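For reference, the JSM-1 model referred to above decomposes each cluster head's signal into a shared common component and a per-head innovation, and reconstruction becomes a joint l1 minimization. The notation below is ours; the proximal term the paper adds to each subproblem (typically of the form (rho/2)||x_j - x_j^k||^2) is named in the abstract but its exact form here is an assumption:

```latex
% JSM-1: signal x_j at cluster head j shares a common sparse component z_c
% and has its own sparse innovation z_j, measured by matrix \Phi_j.
x_j = z_c + z_j, \qquad y_j = \Phi_j x_j, \qquad j = 1, \dots, J
% joint reconstruction as a distributed optimization problem:
\min_{z_c,\, z_1, \dots, z_J} \; \lVert z_c \rVert_1 + \sum_{j=1}^{J} \lVert z_j \rVert_1
\quad \text{s.t.} \quad y_j = \Phi_j (z_c + z_j), \; j = 1, \dots, J
```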
HDT: A Heuristic Dynamic Threshold Algorithm to Avoid Reprioritization of LEDBAT
Ma Aman, Jiang Xianliang, Jin Guang
2020, 57(6):  1292-1301.  doi:10.7544/issn1000-1239.2020.20190692
In recent years, with the significant advances in communication technologies and network transmission capabilities, application requirements have grown increasingly diverse (for example, video conferencing and online games demand low latency and low delay jitter, while software updates demand high throughput). To serve delay-insensitive data transmission while keeping bottleneck bandwidth utilization high, low-priority congestion control (LPCC) algorithms such as LEDBAT (low extra delay background transport) have received increasing attention. These algorithms occupy available bandwidth when the link is idle and release it when the link load is high, protecting the transmission of delay-sensitive data. However, when AQM (active queue management) is deployed on the router, the reprioritization problem of LPCC occurs: under heavy link load, LPCC fails to release the occupied bandwidth and degenerates into an ordinary congestion control algorithm. To solve this problem, this paper proposes a heuristic dynamic threshold adjustment algorithm, called HDT, which dynamically searches for the optimal delay threshold, ensuring that LEDBAT maintains low priority while coexisting with AQM, without reducing link utilization. We establish different network scenarios in the NS2 network simulator to verify the effectiveness of the proposed algorithm. The results show that HDT effectively solves the reprioritization problem while ensuring link bandwidth utilization.
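For context, LEDBAT's delay-based window update (in the spirit of RFC 6817) is sketched below, with the target delay exposed as the adjustable parameter that HDT tunes; the adaptation of `target` itself is not shown, and this skeleton is an illustration rather than HDT's algorithm.

```python
# LEDBAT-style congestion window update: grow while queuing delay is below
# the target, back off proportionally when it rises above the target.
GAIN = 1.0

def ledbat_update(cwnd, base_delay, current_delay, target, bytes_acked, mss):
    queuing_delay = current_delay - base_delay
    off_target = (target - queuing_delay) / target  # >0 below target, <0 above
    cwnd += GAIN * off_target * bytes_acked * mss / cwnd
    return max(cwnd, 2 * mss)  # keep a minimal window of two segments
```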
RGNE: A Network Embedding Method for Overlapping Community Detection Based on Rough Granulation
Zhao Xia, Zhang Zehua, Zhang Chenwei, Li Xian
2020, 57(6):  1302-1311.  doi:10.7544/issn1000-1239.2020.20190572
Community mining in complex information networks has been a research hotspot in recent years, and the detection of overlapping communities has important practical significance. Traditional community detection methods assign every node to exactly one subclass, forming a non-overlapping partition. However, such hard partitioning methods have difficulty handling complex situations involving uncertain and noisy information, and at present there is little research on network embedding methods for overlapping community detection. Aiming at the problems of community drift and boundary uncertainty, a network embedding community detection method based on rough granulation is proposed. Node representations of structure information and attribute information are obtained through network embedding, mapping similar nodes to nearby positions in a low-dimensional continuous vector space. Then, rough granulation is applied to the network structure and multi-level information to handle uncertain regions, and overlapping communities are finally generated. Experimental results on public network datasets and synthetic datasets show that the RGNE (network embedding based on rough granulation) method achieves higher precision and can effectively handle community detection in uncertain networks. Finally, the parameter settings affecting the experimental results are discussed and analyzed in detail.
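As background on the rough-set machinery behind "rough granulation", the standard lower and upper approximations below split a candidate community into a certain core and an uncertain boundary, and it is the boundary region that naturally admits overlapping membership (standard rough-set notation, not the paper's specific construction):

```latex
% For an equivalence (granulation) relation R and a candidate community X,
% with [v]_R the granule containing node v:
\underline{R}X = \{\, v \mid [v]_R \subseteq X \,\}, \qquad
\overline{R}X = \{\, v \mid [v]_R \cap X \neq \varnothing \,\}
% The boundary region \overline{R}X \setminus \underline{R}X holds the nodes
% that may belong to several (overlapping) communities.
```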
NT-EP: A Non-Topology Method for Predicting the Scope of Social Message Propagation
Liu Zitu, Quan Ziwei, Mao Rubai, Liu Yong, Zhu Jinghua
2020, 57(6):  1312-1322.  doi:10.7544/issn1000-1239.2020.20190584
Accurately predicting the propagation scope of a message in social networks is an important part of public opinion analysis and has received extensive attention in the field of data mining. Most current research uses both the social network topology and user action logs to predict the spread of social messages. In real applications, user action logs are usually easy to obtain, but the topology of the social network (for example, friend relationships between users) is not. Therefore, non-topology prediction of social message propagation has broad application prospects. In this paper, we propose a new method called NT-EP for predicting the propagation scope of social messages. NT-EP consists of four parts: 1) we construct a weighted propagation graph for each message based on the decay of message propagation over time, and then use a random walk strategy to obtain multiple propagation paths on the graph (see the sketch after this abstract); 2) we feed the propagation paths of the target message into a Bi-GRU, combined with an attention mechanism, to obtain the propagation feature representation of the target message; 3) we use gradient descent to compute the influence representation of other messages; 4) combining the propagation feature representation of the target message with the influence representation of other messages, we predict the propagation scope of the target message. Experimental results on the Sina microblog and Flixster datasets show that our method is superior to existing propagation prediction methods on many indicators, such as MSE and F1-score.
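In the Python illustration of stage 1) below, the exponential time-decay weight and the weighted sampler are our assumptions standing in for whatever decay function NT-EP actually uses.

```python
# Build a time-decay weighted propagation graph from one message's action
# log, then sample weighted random walks as propagation paths.
import math
import random

def build_graph(action_log, lam=0.1):
    # action_log: list of (user, timestamp) for one message, sorted by time
    edges = {}
    for i, (u, tu) in enumerate(action_log):
        for v, tv in action_log[i + 1:]:
            w = math.exp(-lam * (tv - tu))  # weight decays with the time gap
            edges.setdefault(u, []).append((v, w))
    return edges

def random_walk(edges, start, length):
    path, node = [start], start
    for _ in range(length):
        nbrs = edges.get(node)
        if not nbrs:
            break
        total = sum(w for _, w in nbrs)
        r, acc = random.uniform(0, total), 0.0
        for v, w in nbrs:  # roulette-wheel choice proportional to edge weight
            acc += w
            if r <= acc:
                node = v
                break
        path.append(node)
    return path
```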
An Offset Addition Vector Coding Strategy for Supporting Path Tracing Query in RFID-Based Supply Chains
Liao Guoqiong, Yang Lechuan, Zhang Haiyan, Yang Xianpei
2020, 57(6):  1323-1334.  doi:10.7544/issn1000-1239.2020.20190207
As an important technical support of the intelligent Internet of Things, radio frequency identification (RFID) technology has been widely used in supply chain tracing and real-time monitoring. To improve the efficiency of tracking and querying tagged objects in RFID-based supply chains, the temporal and spatial data of RFID objects must be encoded effectively. Considering the characteristics of data in RFID-based supply chains, including large volume, the existence of loops, and frequent updates, and based on the idea that infinitely many vectors can be inserted between any pair of vectors, an offset-addition vector path coding strategy is proposed. This strategy takes spatiotemporal data nodes as coding objects and assigns a unique pair of vectors to each node by vector addition, achieving uniform coding. At the same time, an optimization strategy is proposed to solve the overflow problem caused by excessively large code values, and its correctness is proved. The experimental results show that the proposed vector encoding strategy and its optimization can meet the requirements of different types of tracing queries, with the advantages of fast encoding, slow code-value overflow, high update efficiency, and support for loops.
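The "insert between a pair of vectors" idea is easy to demonstrate. The Python sketch below orders codes by the ratio of their two components (Stern-Brocot style), which is one natural convention and an assumption on our part, not necessarily the paper's ordering.

```python
# Component-wise ("offset") addition of two boundary vectors yields a new
# code strictly between them, so fresh codes can always be inserted without
# renumbering existing ones.
def insert_between(u, v):
    return (u[0] + v[0], u[1] + v[1])

def less(c1, c2):
    # order codes by the ratio x/y, compared via cross-multiplication
    # so the (1, 0) boundary code needs no special case
    return c1[0] * c2[1] < c2[0] * c1[1]

lo, hi = (0, 1), (1, 0)        # boundary codes of the whole code space
a = insert_between(lo, hi)     # (1, 1)
b = insert_between(lo, a)      # (1, 2), strictly between lo and a
assert less(lo, b) and less(b, a) and less(a, hi)
```

Because component values grow with every insertion, code values eventually become large, which is the overflow problem that the proposed optimization strategy addresses.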
Reverse Nearest Neighbor Query Based on New Index Structure
Liu Runtao, Liang Jianchuang
2020, 57(6):  1335-1346.  doi:10.7544/issn1000-1239.2020.20190470
To improve the efficiency of reverse nearest neighbor queries, we first define the minimum bounding square based on nearest neighbor distance (MBSD) for spatial data, together with four orders on spatial data. Based on these definitions, a new spatial index structure is proposed: the MBDNN-tree, an index tree based on the minimum bounding square and the nearest neighbor distance. Following the R-tree's idea of dividing spatial data, the new index represents data by their MBSDs and constructs the nodes at each level of an MBDNN-tree according to the nodes' geometric distribution and the corresponding order relations, dividing the original dataset in the defined orders from top to bottom and from left to right. Detailed descriptions and proofs are given for the preprocessing and construction algorithms of the MBDNN-tree, and the properties of the MBDNN-tree are derived. Pruning rules for reverse nearest neighbor queries on an MBDNN-tree are then established, and a reverse nearest neighbor query algorithm on the MBDNN-tree is presented. The geometric orders among nodes on the same level of the MBDNN-tree reduce the number of nodes visited during a reverse nearest neighbor query, so that query efficiency is greatly improved. Finally, the reverse nearest neighbor query algorithm based on this structure is analyzed experimentally. Experiments show that the reverse nearest neighbor query algorithm based on the MBDNN-tree has better query performance.