计算机研究与发展 (Journal of Computer Research and Development)
       ISSN 1000-1239   CN 11-1777/TP  
2019 Vol. 56, No. 2
Published: 2019-02-01

   
 
227 Survey on RDMA-Based Distributed Storage Systems
Chen Youmin, Lu Youyou, Luo Shengmei, Shu Jiwu
DOI: 10.7544/issn1000-1239.2019.20170849
RDMA (remote direct memory access) is being widely used in the big data area. It allows a local host to access remote memory without the involvement of the remote CPU, and provides extremely high bandwidth, high throughput and low latency, thus helping to boost the performance of distributed storage systems dramatically. As a whole, RDMA-enabled distributed storage systems bring new opportunities to big data processing. In this paper, we first point out that simply replacing the network module in distributed systems cannot fully exploit the advantages of RDMA in either semantics or efficiency, and that a rethinking of storage system design is urgently needed. Then, two key aspects of using RDMA efficiently are illustrated: one is the efficient management of hardware resources, including the careful utilization of the NIC and CPU cache, parallel acceleration with multicore CPUs, and memory management; the other is the reformation of the software by closely coupling the software design with RDMA semantics, which uses the new features of RDMA to redesign data placement schemes, data indexing and distributed protocols. Related research on distributed file systems, distributed key-value stores, and distributed transactional systems is introduced to illustrate the above two aspects. A summary of the paper and suggestions for future research are given at the end.
2019 Vol. 56 (2): 227-239
240 Survey on Accelerating Neural Network with Hardware
Chen Guilin, Ma Sheng, Guo Yang
DOI: 10.7544/issn1000-1239.2019.20170852
Artificial neural networks are widely used in artificial intelligence applications such as voice assistants, image recognition and natural language processing. As applications grow more complex, the computational complexity has also increased dramatically. The traditional general-purpose processor is limited by memory bandwidth and energy consumption when dealing with complex neural networks. People have begun to improve the architecture of general-purpose processors to support efficient processing of neural networks. In addition, developing special-purpose accelerators has become another way to accelerate neural network processing; compared with the general-purpose processor, they offer lower energy consumption and higher performance. This article introduces the designs of current general-purpose processors and special-purpose accelerators for supporting neural networks, and summarizes the latest design innovations and breakthroughs of neural network acceleration platforms. In particular, the article provides an overview of neural networks and discusses the improvements made by various general-purpose chips to support them, which include supporting low-precision operations and adding calculation modules to speed up neural network processing. Then, from the viewpoint of the computational structure and storage structure, the article summarizes the customized designs of special-purpose accelerators, and describes the dataflows used by neural network chips based on the reuse of various types of data in the neural network. Through analyzing the advantages and disadvantages of these solutions, the article puts forward the future design trends and challenges of neural network accelerators.
2019 Vol. 56 (2): 240-253
254 Methodologies for Imitation Learning via Inverse Reinforcement Learning: A Review
Zhang Kaifeng, Yu Yang
DOI: 10.7544/issn1000-1239.2019.20170578
Motivated by applying reinforcement learning methods to autonomous robotic systems and complex decision making problems, reinforcement learning is becoming more and more popular in the machine learning community. Traditional reinforcement learning is a learning paradigm in the machine learning field which learns from interactions between the agent and the environment. However, for the vast majority of cases, the environments of sequential decision making problems cannot provide an explicit reward signal immediately, or the reward signal can be much delayed. This becomes the bottleneck for applying reinforcement learning methods to more complex tasks. Inverse reinforcement learning is therefore proposed to recover the reward function from expert demonstrations in a Markov decision process (MDP) by assuming that the expert demonstrations are optimal. So far, imitation learning algorithms which combine direct reinforcement learning approaches and inverse reinforcement learning approaches have already made great progress. This paper briefly introduces the basic concepts of reinforcement learning, inverse reinforcement learning and imitation learning. It also gives an introduction to the existing problems concerning inverse reinforcement learning and some other methods in imitation learning. In addition, we introduce some existing bottlenecks when applying the above methods to real world applications.
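As background for the expert-optimality assumption mentioned in this abstract, the textbook IRL formulation (not a formula taken from the paper) seeks a reward under which the expert policy is at least as good as every alternative policy:

```latex
% Standard IRL feasibility condition under the expert-optimality assumption (background)
\mathbb{E}\!\left[\sum_{t=0}^{\infty}\gamma^{t}R(s_{t}) \,\middle|\, \pi_{E}\right]
\;\ge\;
\mathbb{E}\!\left[\sum_{t=0}^{\infty}\gamma^{t}R(s_{t}) \,\middle|\, \pi\right]
\qquad \text{for all policies } \pi ,
```

where \pi_E is the expert policy, \gamma the discount factor, and R the reward to be recovered.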
2019 Vol. 56 (2): 254-261
262 Active Sampling for Deep Q-Learning Based on TD-error Adaptive Correction
Bai Chenjia, Liu Peng, Zhao Wei, Tang Xianglong
DOI: 10.7544/issn1000-1239.2019.20170812
Deep reinforcement learning (DRL) is one of the research hotspots in artificial intelligence, and deep Q-learning is one of its representative achievements. In some fields, its performance has met or exceeded the level of human experts. Training deep Q-learning requires a large number of samples, which are obtained through interaction between the agent and the environment. However, this interaction is usually computationally intensive, and the interaction risk sometimes cannot be avoided. We propose an active sampling method based on TD-error adaptive correction in order to solve the sample efficiency problem in deep Q-learning. In various deep Q-learning methods, the updating of storage priorities in experience memory lags behind the updating of the Q-network parameters. As a result, many samples are not selected for Q-network training because the storage priority cannot reflect the true distribution of TD-error in experience memory. The TD-error adaptive correction active sampling method proposed in this paper uses the replay periods of samples and the Q-network state to establish a priority bias model, which estimates the real priority of each sample in experience memory during Q-network iteration. Samples are selected from experience memory according to the corrected priority, and the bias model parameters are adaptively updated in a segmented form. We analyze the complexity of the algorithm and the relationship between learning performance, the order of the polynomial features, and the updating period of the model parameters. Our method is verified on the Atari 2600 platform. The experimental results show that the proposed method improves the learning speed, reduces the number of interactions between agent and environment, and improves the quality of the optimal policy.
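A minimal sketch of the general idea (not the authors' code): a prioritized replay buffer whose stored TD-error priorities are corrected by a simple polynomial bias model of each sample's "age", i.e., how long ago its priority was last refreshed. The learning rate, polynomial features and update rule below are illustrative assumptions.

```python
import numpy as np

class CorrectedReplay:
    def __init__(self, capacity, order=2):
        self.capacity = capacity
        self.order = order                 # order of the polynomial features (assumption)
        self.w = np.zeros(order + 1)       # bias-model parameters
        self.data, self.priority, self.age = [], [], []

    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:
            self.data.pop(0); self.priority.pop(0); self.age.pop(0)
        self.data.append(transition)
        self.priority.append(abs(td_error))
        self.age.append(0)

    def corrected_priority(self):
        p = np.array(self.priority)
        a = np.array(self.age, dtype=float)
        feats = np.vstack([a ** k for k in range(self.order + 1)])
        # stored priority plus estimated drift since it was last refreshed
        return np.maximum(p + self.w @ feats, 1e-6)

    def sample(self, batch_size):
        p = self.corrected_priority()
        idx = np.random.choice(len(self.data), batch_size, p=p / p.sum())
        self.age = [g + 1 for g in self.age]        # unrefreshed samples keep ageing
        return idx, [self.data[i] for i in idx]

    def refresh(self, idx, new_td_errors):
        # after a Q-network update: refresh true priorities, adapt the bias model
        for i, e in zip(idx, new_td_errors):
            drift = abs(e) - self.priority[i]
            feats = np.array([self.age[i] ** k for k in range(self.order + 1)])
            self.w += 1e-3 * drift * feats          # crude adaptive update (illustrative)
            self.priority[i], self.age[i] = abs(e), 0
```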
2019 Vol. 56 (2): 262-280
281 Large-Scale Dynamic Network Community Detection by Multi-Objective Evolutionary Clustering
Li He, Yin Ying, Li Yuan, Zhao Yuhai, Wang Guoren
DOI: 10.7544/issn1000-1239.2019.20170751
Evolutionary clustering is often utilized for dynamic network community detection to uncover the evolution of community structure over time. However, it has the following main problems: 1) the absence of error correction may lead to the result-drifting problem and the error accumulation problem; 2) the NP-hardness of modularity-based community detection makes it inefficient to get an exact solution. In this paper, an efficient and effective multi-objective method, DYN-MODPSO (multi-objective discrete particle swarm optimization for dynamic networks), is proposed, in which the traditional evolutionary clustering framework and the particle swarm algorithm are modified and enhanced, respectively. The main work of this article is as follows: 1) a novel strategy, the recently future reference, is devised to correct the initial clustering result and make dynamic community detection more effective; 2) the traditional particle swarm algorithm is modified so that it can be effectively integrated with the evolutionary clustering framework; 3) a de-redundancy random-walk-based initial population generation method is presented to improve the diversity and the initial precision of the individuals; 4) a multi-individual crossover operator and an improved interference operator are developed to enhance the local search and convergence abilities of DYN-MODPSO. Extensive experiments conducted on real and synthetic dynamic networks show that the efficiency and effectiveness of DYN-MODPSO are significantly better than those of the competitors.
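For reference, the modularity objective whose maximization is NP-hard, as mentioned above, is the standard Newman definition (general background, not reproduced from the paper):

```latex
Q \;=\; \frac{1}{2m}\sum_{i,j}\left(A_{ij}-\frac{k_{i}k_{j}}{2m}\right)\delta(c_{i},c_{j})
```

where A is the adjacency matrix, k_i the degree of node i, m the number of edges, and \delta(c_i, c_j) = 1 when nodes i and j are assigned to the same community.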
2019 Vol. 56 (2): 281-292
293 Deep Semantic Representation of Time-Sync Comments for Videos
Wu Famin, Lü Guangyi, Liu Qi, He Ming, Chang Biao, He Weidong, Zhong Hui, Zhang Le
DOI: 10.7544/issn1000-1239.2019.20170752
With the development of the Internet, crowdsourced short texts such as time-sync comments for videos are of significant importance for online media sharing platforms and the leisure industry. They also provide new research opportunities for the evolution of recommender systems, artificial intelligence and so on, which have tremendous value for every walk of life. At the same time, crowdsourced short text analysis faces many challenges because of high noise, non-standard expressions and latent semantic implication. These have limited the application of traditional natural language processing (NLP) techniques, so a novel short text understanding method is needed that is highly fault tolerant and can capture the deep semantics. To this end, this paper proposes a deep semantic representation model based on recurrent neural networks (RNN). It avoids the effect of noise on text segmentation by exploiting a character-based RNN. To achieve the semantic representation, we apply the neural network to represent the latent semantics such that the output semantic vectors can deeply reflect the time-sync comments. We further design a time-sync comment explanation framework based on semantic retrieval, used for the validation of the semantic representation. Finally, we compare the model with other baselines and apply multiple measures to validate it. The experimental results show that the model captures the semantics of these short texts more precisely, and that the assumptions related to time-sync comments are reasonable.
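A minimal character-level RNN encoder sketch (PyTorch), illustrating how a noisy time-sync comment can be mapped to a dense semantic vector without word segmentation. The architecture, dimensions and the retrieval-by-cosine-similarity usage are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CharCommentEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)

    def forward(self, char_ids):                 # char_ids: (batch, seq_len)
        x = self.embed(char_ids)
        _, h = self.rnn(x)                       # h: (1, batch, hidden_dim)
        return h.squeeze(0)                      # one semantic vector per comment

# usage sketch: encode two comments and compare them for semantic retrieval
enc = CharCommentEncoder(vocab_size=5000)
ids = torch.randint(0, 5000, (2, 20))            # two comments of 20 characters
vecs = enc(ids)
sim = torch.nn.functional.cosine_similarity(vecs[0], vecs[1], dim=0)
```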
2019 Vol. 56 (2): 293-305
306 Probability Matrix Factorization for Link Prediction Based on Information Fusion
Wang Zhiqiang, Liang Jiye, Li Ru
DOI: 10.7544/issn1000-1239.2019.20170746
As one kind of typical network big data, social-information networks such as Weibo and Twitter include both the complex network structure among users and the rich microblog/Tweet information published by users. Notably, most existing methods make use of only the network topological information or only the non-topological information for link prediction; effective methods that fuse the topological and non-topological information in social-information networks are still lacking. A link prediction method is proposed from the perspective of users' topics by fusing users' topic similarity in social-information networks. The method proceeds as follows: first, a topic similarity between users is defined based on the users' topic representations, and a topic-similarity-based sparse network is constructed from it; second, the information of the following/followed network and the topic-similarity-based network is fused into a unified framework of probabilistic matrix factorization, from which the latent-feature representations of the network nodes and the linking relation parameters are obtained; finally, the linking probability between network nodes is calculated from the obtained latent-feature representations and linking relation parameters. The proposed approach provides a general modeling strategy for fusing multi-network information and a learning-based solution. Link prediction experiments are conducted on four real network datasets, including Twitter and Weibo. The experimental results demonstrate that the proposed method is more effective than the others.
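A rough sketch of the fusion idea, assuming a simple squared-error form of the factorization: the follow network A and the topic-similarity network S share the same user latent factors U, so both networks shape the learned representation. The loss weights, priors and gradient steps below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def fuse_pmf(A, S, k=16, lam=0.1, alpha=0.5, lr=0.01, iters=200):
    """A: n x n follow adjacency, S: n x n topic-similarity matrix."""
    n = A.shape[0]
    U = 0.1 * np.random.randn(n, k)     # user latent factors (shared)
    V = 0.1 * np.random.randn(n, k)     # "followed-side" factors
    for _ in range(iters):
        E_a = A - U @ V.T               # reconstruction error on the follow network
        E_s = S - U @ U.T               # reconstruction error on the topic network
        grad_U = -E_a @ V - 2 * alpha * E_s @ U + lam * U
        grad_V = -E_a.T @ U + lam * V
        U -= lr * grad_U
        V -= lr * grad_V
    return U, V

def link_score(U, V, i, j):
    # predicted linking probability between users i and j (logistic link, assumption)
    return 1.0 / (1.0 + np.exp(-(U[i] @ V[j])))
```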
2019 Vol. 56 (2): 306-318
319 Multi-Scale Faster-RCNN Algorithm for Small Object Detection
Huang Jipeng, Shi Yinghuan, Gao Yang
DOI: 10.7544/issn1000-1239.2019.20170749
Normally, a small object is an object which covers only a small part of a whole image. Compared with regular objects, small objects carry less information, and training data for small objects is difficult to annotate. This leads to poor performance when directly employing previous object detection methods for small object detection. Moreover, the detection methods designed for small objects are often too complex or not generic. In this paper, we propose a small object detection algorithm named multi-scale Faster-RCNN. According to the characteristics of convolutional neural networks, the structure of Faster-RCNN is modified so that the network can integrate both low-level and high-level features for multi-scale object detection, which improves the accuracy of small object detection. Simultaneously, to address the difficulty of annotating training data, we use training data crawled from a search engine to train the model. Because the distribution of the crawled data differs from that of the real test data, training images in which objects have high resolution are transformed by means of down-sampling and up-sampling, which makes the feature distributions of training images and test images more similar. The experimental results show that the mean average precision (mAP) of the proposed approach can be up to 5% higher than that of the original Faster-RCNN in the task of small object detection.
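A sketch of the low-level/high-level feature fusion idea in PyTorch: a coarse, semantically stronger feature map is upsampled and concatenated with a fine-grained one before region proposal, so small objects keep spatial detail. The layer shapes and channel counts are illustrative assumptions, not the paper's backbone.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusedBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.low = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                                 nn.MaxPool2d(2))                  # stride 2, fine detail
        self.high = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
                                  nn.MaxPool2d(2))                 # stride 4, semantics
        self.reduce = nn.Conv2d(64 + 128, 128, 1)                  # fuse channels

    def forward(self, img):
        low = self.low(img)
        high = self.high(low)
        high_up = F.interpolate(high, size=low.shape[-2:], mode='nearest')
        return self.reduce(torch.cat([low, high_up], dim=1))       # fused multi-scale map

feat = FusedBackbone()(torch.randn(1, 3, 224, 224))   # -> (1, 128, 112, 112)
```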
2019 Vol. 56 (2): 319-327
328 CrowdTracker: Object Tracking Using Mobile Crowd Sensing
Jing Yao, Guo Bin, Chen Huihui, Yue Chaogang, Wang Zhu, Yu Zhiwen
DOI: 10.7544/issn1000-1239.2019.20170808
This paper proposes CrowdTracker, a novel object tracking system based on mobile crowd sensing (MCS). Different from other studies that are based on video surveillance, CrowdTracker recruits people to collaboratively take photos of the object to achieve object movement prediction and tracking. The optimization objective of CrowdTracker is to effectively track the moving object in real time while minimizing the cost of user incentives. The incentive is determined by the number of workers assigned and the total distance that workers move to complete the task. To achieve this objective, CrowdTracker proposes an algorithm, MPRE, to predict the object's moving pattern, and two task allocation algorithms, T-centric and P-centric. T-centric selects workers in a task-centric way, while P-centric allocates tasks in a people-centric manner. By analyzing a large number of historical vehicle trajectories, MPRE builds a moving model of the vehicle to predict the object's next position. In the predicted regions, CrowdTracker selects an optimal set of workers for the tracking task by utilizing T-centric or P-centric. Experiments are conducted on a large-scale real-world dataset. The experimental results show that CrowdTracker can effectively track the object in real time while reducing the incentive cost.
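A purely illustrative greedy allocation sketch in the task-centric spirit described above: for each predicted region, the closest free worker is chosen, approximating the goal of minimizing the number of assigned workers' total movement. This is an assumption for illustration, not the paper's T-centric algorithm.

```python
import math

def allocate(tasks, workers):
    """tasks, workers: lists of (x, y) positions; returns {task index: worker index}."""
    free = set(range(len(workers)))
    plan = {}
    for t, (tx, ty) in enumerate(tasks):
        if not free:
            break
        # pick the free worker with the smallest movement distance to this task
        w = min(free, key=lambda i: math.hypot(workers[i][0] - tx,
                                               workers[i][1] - ty))
        plan[t] = w
        free.remove(w)
    return plan

print(allocate([(0, 0), (5, 5)], [(1, 1), (6, 4), (9, 9)]))  # {0: 0, 1: 1}
```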
2019 Vol. 56 (2): 328-337
338 SegGraph: An Algorithm for Loop-Closure Detection in Outdoor Scenes Using 3D Point Clouds
Liao Ruijie, Yang Shaofa, Meng Wenxia, Dong Chunmei
DOI: 10.7544/issn1000-1239.2019.20180092
We present SegGraph, a new algorithm for loop-closure detection (LCD) for autonomous robots equipped with three-dimensional laser scanners in outdoor scenes such as urban streets. LCD checks whether the robot has passed a place near one it visited at some point before, and is a key component of a robot's simultaneous localization and mapping system. Our SegGraph algorithm consists of three steps: 1) partition each of the two input point clouds into point clusters corresponding to smooth surfaces, while discarding the ground planes; 2) construct complete weighted graphs from the cluster sets, where weights correspond to distances between surface centroids; 3) check whether these two graphs contain a sufficiently large common subgraph. The key novelty of SegGraph is that in matching common subgraphs, we mainly compare the distances between corresponding pairs of surface clusters. The rationale is that, due to noise in point cloud data and the imperfection of segmentation techniques, different point clouds obtained from nearby places may often be partitioned into drastically different surface segments; however, the distances between the centroids of these segments tend to be stable across different point clouds. We develop an efficient heuristic randomized algorithm for finding common subgraphs, implement a full LCD algorithm and evaluate it on the publicly available KITTI dataset, one of the most widely used benchmarks. Experimental results demonstrate that our LCD algorithm achieves good accuracy and efficiency.
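A sketch of the core geometric test: centroid-to-centroid distances between matched segment pairs should agree across the two point clouds. The greedy pairwise-consistency check below is an illustration of that idea, not the paper's randomized heuristic; the tolerance value is an assumption.

```python
import numpy as np

def centroid_distances(centroids):
    c = np.asarray(centroids)
    # pairwise Euclidean distances between segment centroids
    return np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)

def consistent_matches(cent_a, cent_b, candidates, tol=0.5):
    """candidates: list of (i, j) segment pairs; keep a mutually consistent subset."""
    da, db = centroid_distances(cent_a), centroid_distances(cent_b)
    kept = []
    for (i, j) in candidates:
        # accept (i, j) only if its distances to every kept pair agree across clouds
        if all(abs(da[i, i2] - db[j, j2]) < tol for (i2, j2) in kept):
            kept.append((i, j))
    return kept          # a sufficiently large kept set suggests a loop closure
```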
2019 Vol. 56 (2): 338-348
349 Elephant Flow Detection Algorithm Based on Lowest Rate Eviction Integrated with d-Left Hash
Li Chunqiang, Dong Yongqiang, Wu Guoxin
DOI: 10.7544/issn1000-1239.2019.20170732
A small percentage of high-rate, large-sized flows consume most of the network bandwidth. Efficiently identifying these flows is of great significance for traffic engineering, so as to alleviate network congestion and improve network transport performance. With the development of network technology, the capacity of transmission links and the transfer rate of data flows keep growing, so network equipment with high-speed packet forwarding capability places high performance requirements on flow identification algorithms. Flows whose size and transmission rate both exceed certain thresholds are usually defined as elephant flows. In this paper, a novel algorithm is proposed for elephant flow detection, in which the data structure of flow entries is indexed by a d-Left hash table to meet the performance requirements of high-speed packet processing. The proposed algorithm combines the d-Left hash data structure with the eviction of low-rate flows' entries in order to identify elephant flows efficiently, and this eviction also improves the detection accuracy. Theoretical analysis is conducted to demonstrate the accuracy, performance and memory overhead of the proposed algorithm. Experimental results on real data sets show that the proposed algorithm outperforms the CSS, WCSS, S-LRU and L-LRU algorithms in terms of accuracy and performance at comparable memory overhead.
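A minimal sketch of the combination described above: a d-left hash table (insert into the least-loaded of d candidate buckets) whose full buckets evict their lowest-rate entry, so persistent high-rate flows tend to survive. The table sizes, the rate estimate and the eviction threshold are illustrative assumptions, not the paper's parameters.

```python
import time, hashlib

D, BUCKETS, CELLS = 2, 1024, 4          # d sub-tables, buckets per table, cells per bucket
RATE_THRESHOLD = 1000.0                 # bytes/s below which an entry may be evicted (assumption)

def _h(flow_id, d):
    return int(hashlib.md5(f"{d}:{flow_id}".encode()).hexdigest(), 16) % BUCKETS

tables = [[[] for _ in range(BUCKETS)] for _ in range(D)]   # cell: [flow_id, bytes, first_ts]

def update(flow_id, pkt_len, now=None):
    now = now if now is not None else time.time()
    buckets = [tables[d][_h(flow_id, d)] for d in range(D)]
    for b in buckets:                   # existing entry: just accumulate bytes
        for cell in b:
            if cell[0] == flow_id:
                cell[1] += pkt_len
                return
    b = min(buckets, key=len)           # d-left rule: insert into the least-loaded bucket
    if len(b) < CELLS:
        b.append([flow_id, pkt_len, now])
        return
    # bucket full: consider evicting the entry with the lowest observed rate
    rate = lambda c: c[1] / max(now - c[2], 1e-6)
    victim = min(range(CELLS), key=lambda i: rate(b[i]))
    if rate(b[victim]) < RATE_THRESHOLD:
        b[victim] = [flow_id, pkt_len, now]
```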
2019 Vol. 56 (2): 349-362
363 Traffic Latency Characterization and Fingerprinting in Mobile Cellular Networks
Wei Songjie, Wu Chao, Luo Na, Zhang Gongxuan
DOI: 10.7544/issn1000-1239.2019.20170501
Internet backbone traffic is a complicated mix of data flows initiated by clients over different network connections, including 3G/4G mobile cellular networks and wired broadband networks. Without examining application-layer meta-data or inspecting TCP/IP packet contents, existing network traffic analysis and characterization methods struggle to differentiate traffic flows from these two types of network connections. By studying the different link-layer techniques and wireless radio resource control (RRC) mechanisms, the traffic temporal characteristics are analyzed and formalized based on packet delay variance. Using TCP/IP round-trip time (RTT) calculation, the experiments extract six significant network traffic features related to packet delay, and apply them to train and test machine-learning classifiers that separate 3G/4G client traffic flows from broadband connection flows. These features focus on the transmission latency caused by a client's first-hop Internet connection, and reveal the temporal variance of packet distribution across flows from different links. Experiments with a realistic dataset of mobile application traffic achieve a classification precision of more than 92% with effective traffic coverage and error resilience. The proposed method also surpasses other related solutions by relying only on the temporal distribution of flow packets, without needing to inspect packet content or encapsulation.
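A sketch of the overall pipeline: per-flow delay statistics feed a standard classifier that separates cellular from wired-broadband flows. The feature set below (simple RTT statistics), the classifier choice and the synthetic training data are assumptions for illustration; they are not the six features used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def flow_features(rtts):
    """Summarize one flow's RTT samples (milliseconds) into a feature vector."""
    r = np.asarray(rtts, dtype=float)
    diffs = np.diff(r)
    return [r.mean(), r.std(), r.min(), r.max(),
            np.median(r), diffs.std() if len(diffs) else 0.0]

# toy data: cellular flows assumed to have larger, more variable first-hop delay
X = np.array([flow_features(np.random.gamma(2.0, 30.0, 50)) for _ in range(100)] +
             [flow_features(np.random.gamma(2.0, 8.0, 50)) for _ in range(100)])
y = np.array([1] * 100 + [0] * 100)        # 1 = 3G/4G cellular, 0 = broadband
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```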
2019 Vol. 56 (2): 363-374
375 Real Time Rendering of Large Scale Realistic Ocean Scenes Driven by Time and Space
Li Ying, Tang Yong, Zhang Haoran, Liu Ding, Zhou Shengteng, Wang Sai
DOI: 10.7544/issn1000-1239.2019.20170895
Rendering large-scale ocean scenes plays an important role in simulators, movies and other applications. Because of the complexity of the ocean and the sky, it is difficult to animate large-scale realistic ocean scenes in real time, especially under precise time and space conditions. In this paper, we present a real-time rendering framework for large-scale realistic ocean scenes. In traditional real-time ocean rendering, the skybox method, a large textured cube with six images, is usually used to model the sky for its speed and simplicity. However, it suffers from seams at the edges of the skybox and is not flexible enough. In our case, we apply a skysphere method which makes it convenient to position the celestial bodies and to set up the light scattering model. To show the real movement of celestial bodies, we establish a simplified astronomical model to compute the position of every celestial body in the scene. When the wind blows over the ocean, high-frequency short waves appear first, then low-frequency long waves grow; as the waves become fully grown, the long waves become more prominent. Researchers in graphics often focus on rendering the long waves while ignoring the short waves. We apply a unified directional spectrum for long and short wind-driven waves to draw the waves, which makes up for the shortcomings of short-wave rendering. Ocean illumination is a difficult problem for computer graphics, because both the ocean and the light source are dynamic. Through analysis of real ocean illumination, we take atmospheric scattering, ocean surface reflection and ocean subsurface scattering as a whole, and build a comprehensive ocean lighting model. With this method, we can make the ocean waves look alive and enrich the optical effects while simulating large-scale ocean scenes under precise time and space conditions. We demonstrate the visual realism and performance of our method in various experiments, achieving high frame rates on different desktop PCs.
2019 Vol. 56 (2): 375-384
385 Inferring Ambient Occlusion from a Single Image
Guo Yuxiao, Chen Leiting, Dong Yue
DOI: 10.7544/issn1000-1239.2019.20170897
Ambient occlusion has been widely used in many graphics and vision tasks for approximating low-frequency global illumination, removing inter-reflection, and inferring scene geometry. Existing methods need either scene geometry or scene appearances under different lightings to compute the ambient occlusion of the scene, which limits their use in many real applications. In this paper, we present a new method for computing ambient occlusion from a single image taken under unknown natural lighting. Our method does not need any scene geometry or illumination information. To this end, we utilize a convolutional neural network (CNN), trained on a large amount of synthetic data, to directly recover the per-pixel ambient occlusion. We propose three network structures for ambient occlusion estimation: a cascade one and a parallel one, both based on previous CNN solutions for intrinsic images, as well as an end-to-end neural network. We analyze their performance and demonstrate that, compared with the parallel and cascade designs, the end-to-end design achieves the best performance. We also validate the efficiency and effectiveness of our method on both synthetic and real images. It is not only faster, but also more accurate than previous methods.
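A minimal end-to-end sketch of the third design direction in PyTorch: a fully convolutional network that maps a single RGB image to a per-pixel ambient occlusion map in [0, 1]. Depth, channel counts and the sigmoid output are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AONet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, img):                      # img: (B, 3, H, W)
        return self.decoder(self.encoder(img))   # per-pixel AO, same H x W

ao = AONet()(torch.randn(1, 3, 128, 128))        # -> (1, 1, 128, 128)
```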
2019 Vol. 56 (2): 385-393
394 A Semantic Similarity Integration Method for Software Feature Location Problem
He Yun, Li Tong, Wang Wei, Li Xiang, Lan Wei
DOI: 10.7544/issn1000-1239.2019.20180103
A feature is an executable functional entity defined in a software system. The process of identifying the mapping relationship between software features and source code is called feature location. Information-retrieval-based feature location methods are widely used in software maintenance, code search and other fields because of their high usability and low overhead. All information-retrieval-based feature location methods rely on semantic similarity calculation. However, there are two main problems: 1) there is a lot of noise data in the source code corpus, which interferes with the result of the similarity calculation; 2) the limitations of different index methods lead to inaccurate similarity results. To solve these problems, a semantic similarity integration method for the software feature location problem is proposed. This method introduces part-of-speech (POS) filtering into the preprocessing stage, effectively filtering the source code noise data and improving the accuracy of the similarity calculation. Then, different index methods are integrated to calculate similarities based on the structured characteristics of the source code. Experiments are performed on open data benchmarks. Compared with the existing methods, POS filtering improves the mean reciprocal rank by an average of 30.88%, while similarity integration improves it by an average of 10.28%. The experimental results verify the effectiveness of the proposed methods.
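A sketch of the two ideas just described, under assumptions: (1) keep only noun/verb tokens of the corpus (POS tags assumed to come from any tagger), and (2) integrate two similarity indexes by a weighted sum. The tag set, the choice of TF-IDF plus Jaccard as the two indexes, and the weight are all illustrative, not the paper's configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

KEEP = {"NN", "NNS", "VB", "VBD", "VBG"}          # noun/verb POS tags to retain

def pos_filter(tagged_doc):
    """tagged_doc: list of (token, pos_tag) pairs extracted from the code corpus."""
    return " ".join(tok for tok, tag in tagged_doc if tag in KEEP)

def jaccard(a, b):
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / max(len(sa | sb), 1)

def integrated_similarity(query, docs, w=0.6):
    """Weighted combination of a TF-IDF cosine index and a Jaccard index."""
    tfidf = TfidfVectorizer().fit(docs + [query])
    q, d = tfidf.transform([query]), tfidf.transform(docs)
    cos = cosine_similarity(q, d)[0]
    return [w * c + (1 - w) * jaccard(query, doc) for c, doc in zip(cos, docs)]
```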
2019 Vol. 56 (2): 394-409
410 A Coprocessor for Double-Precision Floating-Point Matrix Multiplication
Jia Xun, Wu Guiming, Xie Xianghui, Wu Dong
DOI: 10.7544/issn1000-1239.2019.20170908
Matrix multiplication has been widely used in various application fields, especially numerical computation. However, double-precision floating-point matrix multiplication suffers from non-optimal performance or efficiency on contemporary computing platforms, including CPU, GPGPU and FPGA. To address this problem, acceleration of double-precision floating-point matrix multiplication with a customized coprocessor is proposed in this paper, which adopts a linear array as the basic building block. First, double-buffering and optimized memory scheduling are applied to the basic linear array for better computation efficiency. Then, the architecture of the matrix multiplication coprocessor and a coprocessor-based accelerated computing system are formulated. Furthermore, a performance model tailored for the coprocessor is developed and the design space of the coprocessor is explored in detail. Finally, the functional correctness of the coprocessor is verified and its hardware implementation cost under a mainstream technology node is evaluated. Experimental results show that the proposed coprocessor can achieve a performance of 3 TFLOPS and an efficiency of 99%. Compared with the NVIDIA K40 GPGPU executing double-precision floating-point matrix multiplication, the proposed coprocessor achieves 1.95x the performance with a hardware overhead of only 21.05% in area. This work explores the application of customized acceleration in high-performance computing and offers guidance for improving the performance of existing computing systems.
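A software analogue of the double-buffering strategy mentioned above: a blocked double-precision matrix multiplication in which the next operand panel is staged into a second buffer while the current panel is being consumed. The block size and the "prefetch" step are illustrative; the hardware scheduling itself is of course not expressible in Python.

```python
import numpy as np

def blocked_dgemm(A, B, bs=64):
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m))
    next_block = A[:, 0:bs]                        # buffer 0: first A-panel
    for p in range(0, k, bs):
        a_panel = next_block                       # compute on the current buffer
        if p + bs < k:                             # "prefetch" into the other buffer
            next_block = A[:, p + bs:p + 2 * bs]
        C += a_panel @ B[p:p + bs, :]              # rank-bs update
    return C

A, B = np.random.rand(256, 256), np.random.rand(256, 256)
assert np.allclose(blocked_dgemm(A, B), A @ B)     # numerically matches plain matmul
```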
2019 Vol. 56 (2): 410-420
421 Dynamic Binary Translation and Instrumentation Based Function Call Tracing
Lu Shuaibing, Zhang Ming, Lin Zhechao, Li Hu, Kuang Xiaohui, Zhao Gang
DOI: 10.7544/issn1000-1239.2019.20170657
Dynamic function call tracing is one of the most important techniques for Linux kernel analysis. Existing tools suffer from insufficient support for instruction set architectures (ISAs) and low efficiency. We design and implement a function call tracing tool that supports multiple ISAs with high efficiency. First, we use the binary translation system to load the kernel image and recognize the branch instruction types. Second, we design different instrumentation code for different ISAs and insert it during the translation stage to obtain timestamps, process IDs, thread IDs and function addresses during kernel booting and runtime. Finally, when the kernel boots up and the shell appears, we process all the information and generate function call maps. Based on binary translation, we analyze the text, symbol and string sections of the binary image, without any source code. The enriched intermediate code and extra target code are compatible with optimization algorithms such as block chaining, redundant code elimination and hot path optimization, which reduces the performance overhead. The core algorithm is to design the instrumentation code and obtain the corresponding information for each ISA; it is easy to implement and to migrate to multiple ISAs. Experiments on QEMU and the Linux 4.9 kernel show that the traced information is in accordance with the source code, while instrumentation introduces about 15.24% overhead and information processing about 165.59% overhead relative to the original QEMU, which is much faster than existing tools.
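A sketch of the post-processing stage only: per-thread call stacks are rebuilt from instrumented records and condensed into a caller-to-callee map. The record format (timestamp, pid, tid, address, event) and the call/return event labels are assumptions for illustration, not the tool's actual trace format.

```python
from collections import defaultdict

def build_call_map(records, symbols):
    """records: iterable of (ts, pid, tid, addr, event) with event in {'call', 'ret'};
       symbols: dict mapping function entry address -> function name."""
    stacks = defaultdict(list)                     # (pid, tid) -> current call stack
    call_map = defaultdict(set)                    # caller name -> {callee names}
    for ts, pid, tid, addr, event in sorted(records):
        key, name = (pid, tid), symbols.get(addr, hex(addr))
        if event == "call":
            if stacks[key]:                        # record the edge caller -> callee
                call_map[stacks[key][-1]].add(name)
            stacks[key].append(name)
        elif event == "ret" and stacks[key]:
            stacks[key].pop()
    return call_map
```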
2019 Vol. 56 (2): 421-430
431 Consistency Based Iterating Models in Graph Computing
Sun Rujun, Zhang Lufei, Hao Ziyu, Chen Zuoning
DOI: 10.7544/issn1000-1239.2019.20170902
The time and space complexity of many exact algorithms makes it difficult to meet realistic demands, and approximation algorithms are an alternative choice. Iterative computing is an effective approximation method in numerical computing, and a variety of algorithms and models can be classified under it. With the increase of data scale, iterative algorithms are blooming and developing. Graph computing is a natural way to express and analyze relationships, and numerous graph algorithms are described as iterative models. Parallel iteration is common in large-scale graph computing, and graph iteration methods have different parallel execution models. Most existing parallel graph computing implementations are synchronous, and a few are asynchronous; however, there are few studies on consistency constraints in graph iteration. In this paper, we discuss iterative computing techniques in graph computing models. We analyze the applicability of synchronous and asynchronous iteration, and study asynchronous iterative methods under different consistency levels, along with experimental validation. We propose an adaptive asynchronous execution model which is weakly consistent and overcomes the shortcomings of existing asynchronous iterative methods. Parallel experiments with this model show that it can effectively improve some graph algorithms, especially their iteration and convergence speed.
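A small sketch contrasting the two execution styles discussed above: synchronous iteration reads a snapshot of the previous round's values, while asynchronous (weakly consistent) iteration reads whatever value is currently stored, so updates become visible immediately. A PageRank-style propagation is used as the example; the graph, damping factor and round count are illustrative.

```python
def iterate(graph, n, sync=True, rounds=30, d=0.85):
    """graph: dict node -> list of out-neighbors (every node assumed to have out-edges)."""
    rank = [1.0 / n] * n
    for _ in range(rounds):
        src = list(rank) if sync else rank         # snapshot only in the synchronous mode
        for v in range(n):
            incoming = sum(src[u] / len(graph[u]) for u in range(n) if v in graph[u])
            rank[v] = (1 - d) / n + d * incoming   # async: new value is immediately visible
    return rank

g = {0: [1], 1: [2], 2: [0]}                       # a 3-node cycle
print(iterate(g, 3, sync=True), iterate(g, 3, sync=False))
```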
2019 Vol. 56 (2): 431-441
442 Fully Outsourced Attribute-Based Encryption with Verifiability for Cloud Storage
Zhao Zhiyuan, Wang Jianhua, Xu Kaiyong, Guo Songhui
DOI: 10.7544/issn1000-1239.2019.20170883
Attribute-based encryption (ABE) is a promising cryptographic primitive which significantly enhances the versatility of access control mechanisms in the cloud storage environment. However, the computation cost of most ABE schemes is considerably expensive during the key generation, encryption and decryption phases, and this cost, which grows with the complexity of the access policy or the attribute set, is becoming a critical barrier for applications running on resource-limited devices. To tackle this challenge, a fully outsourced ciphertext-policy attribute-based encryption scheme with verifiability is proposed in this paper. The proposed scheme achieves outsourced key generation, encryption and decryption simultaneously: heavy computations are outsourced to public cloud service providers, and no complex operations are left to the attribute authority, data owner or data user. At the same time, the scheme can verify the correctness of the computation results in an efficient way, which is essential. The proposed scheme is proven indistinguishable under chosen-plaintext attack in the random oracle model, and its verifiability is also proved. Finally, the results of theoretical analysis and experimental simulation show that the proposed scheme has advantages in functionality and efficiency, and is more suitable for practical applications.
2019 Vol. 56 (2): 442-452
Copyright © Editorial Board of Journal of Computer Research and Development
Supported by: Beijing Magtech