ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 April 2018, Volume 55 Issue 4
Network Function Virtualization Technology Research
Zhou Weilin, Yang Yuan, Xu Mingwei
2018, 55(4):  675-688.  doi:10.7544/issn1000-1239.2018.20170937
Network function virtualization (NFV) is a technology that realizes network functions based on virtualization technology and standard commercial servers, switches, and storage, so as to replace the traditional middleboxes implemented as dedicated devices. NFV is proposed to save construction and operation costs for network service providers. NFV can also improve the flexibility and scalability of network services, and can motivate the development and deployment of new network functions. Currently, NFV is still under development, and problems remain in system performance, management and orchestration, reliability, availability, security, and programmability. There have been a number of studies addressing these problems. In this paper, we review the NFV architecture and basic technologies, and summarize the key problems to be solved. Then, we categorize the existing studies by a “four quadrants” method. We analyze and compare a number of typical solutions, and summarize their advantages and overheads. Finally, we discuss the research trends of NFV.
High Performance Load Balancing Mechanism for Network Function Virtualization
Wang Yuwei, Liu Min, Ma Cheng, Li Pengfei
2018, 55(4):  689-703.  doi:10.7544/issn1000-1239.2018.20170923
A network load balancer is an important middlebox component for NFV (network function virtualization): it distributes load accurately, allowing telecom operators to improve service capability and extend capacity flexibly and efficiently. In this paper, we design and implement a high-performance NFV-oriented network load balancing mechanism and system (HVLB) based on the DPDK framework, capable of running on common virtualization platforms. HVLB follows the SDN (software defined networking) principle of decoupling data forwarding from the scheduling strategy. It realizes a multi-core, multi-queue data processing architecture in user space, while ensuring data access isolation and balanced task handling. It selects the target NF and forwards packets according to combined capabilities, including network transmission and computing capacity, which guarantees network performance while delivering data packets accurately. Evaluations on a KVM-based prototype show that our mechanism significantly improves packet processing and forwarding performance compared with LVS, and achieves line-speed processing with 64-byte UDP packets.
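The abstract describes picking the target NF by combining network transmission and computing capability. A minimal sketch of that idea follows; the metric names, the scoring formula, and the weighting parameter alpha are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class NFInstance:
    name: str
    tx_capacity_mpps: float   # packets/s the instance can still transmit
    cpu_headroom: float       # fraction of CPU still available, 0..1

def select_nf(instances, alpha=0.5):
    """Pick the target NF by a weighted score of network and compute headroom.

    alpha trades transmit capacity against CPU headroom; both metrics and
    the scoring formula are illustrative, not the paper's exact model.
    """
    def score(nf):
        return alpha * nf.tx_capacity_mpps + (1.0 - alpha) * nf.cpu_headroom * 10.0
    return max(instances, key=score)

backends = [
    NFInstance("fw-1", tx_capacity_mpps=3.2, cpu_headroom=0.4),
    NFInstance("fw-2", tx_capacity_mpps=1.1, cpu_headroom=0.9),
]
print(select_nf(backends).name)   # fw-2: plenty of CPU left despite lower NIC headroom
```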
Real-Time Link Fault Detection as a Service for Datacenter Network
Wang Junxiao, Qi Heng, Li Keqiu, Zhou Xiaobo
2018, 55(4):  704-716.  doi:10.7544/issn1000-1239.2018.20170941
In large-scale datacenter networks, link fault detection is an important way to guarantee network connectivity and the performance of large-scale online applications. Currently, the function of link fault detection is provided by middleboxes or switches. With the development of software defined networking (SDN) and network function virtualization (NFV), many network functions are being decoupled from hardware devices and deployed in the cloud as services. However, the existing methods of link fault detection face some challenges, such as long detection time, high bandwidth usage, and server overload. To tackle these challenges, we first analyze the existing work on link fault detection. Then we propose the concept of the probe matrix and a probe matrix optimization based link fault detection method. We also design a service framework by combining the link fault detection controller and the SDN controller. Finally, the simulation results show that the proposed method significantly outperforms the existing work in detection period and in bandwidth and server CPU usage, with tolerable computational overhead for probe matrix optimization.
DrawerPipe: A Reconfigurable Packet Processing Pipeline for FPGA
Li Junnan, Yang Xiangrui, Sun Zhigang
2018, 55(4):  717-728.  doi:10.7544/issn1000-1239.2018.20170927
In public clouds, flexible network functions are required to enforce network isolation, service-level agreements, and security for multiple tenants. While software-based network functions are flexible, they have limited capacity, with low processing throughput and high latency. FPGAs offer good programmability and high processing throughput, and are appealing because they balance hardware performance and software flexibility. However, using FPGAs to realize network functions lacks a unified and reconfigurable architecture. This paper presents DrawerPipe, a reconfigurable pipeline model. The model abstracts packet processing into five standard “drawers”, and operators can load their modules into these “drawers”, which are combined into a packet processing pipeline. Because the drawers are independent of each other, modules loaded in different drawers can be executed in parallel. Furthermore, we add a function-independent programmable interface between modules to adapt the communication format between different modules, which also helps to relax the constraints imposed by the interface definition. Finally, we implement a variety of network functions based on DrawerPipe. The results show that DrawerPipe not only has good scalability, but also delivers wire-speed processing performance and high resource utilization, and it can be used for rapid deployment of network functions.
Packets Transmission with Multiple Levels of Credibility and Routing Calculation Based on Virtual Topologies
Chen Wenlong, Zhao Yirong, Xiao Rong, Tang Xiaolan, Xu Ke
2018, 55(4):  729-737.  doi:10.7544/issn1000-1239.2018.20170946
The credibility of routers and forwarding paths in the Internet has been a popular topic. Network equipment from different vendors, and even the same equipment in different management environments, has different credibility. Network flows with diverse credibility requirements are supposed to be transmitted along paths with corresponding credibility levels (CR). In this paper, the credible transmission mechanism with multiple levels (CETML) is proposed, and fundamental credible management strategies are suggested. Both routers and IP prefixes are associated with a CR, and the CR of a network flow is obtained from its source and destination IP addresses. CETML constructs a different virtual topology for each transmission network with a different CR, and ensures that IP packets are forwarded only by routers whose CR is not less than the CR of these packets. Because the forwarding entries include multiple next hops, a small amount of additional memory overhead is introduced in CETML. By analyzing the relevancy of the multi-level virtual topologies, we design a new routing calculation method based on the Floyd algorithm in an SDN environment. All the routing tables of the virtual topologies can be obtained during a single process of successive iterative calculation. Compared with current typical routing algorithms, the calculation time of CETML is significantly reduced.
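One way to read the "successive iterative calculation" over the Floyd algorithm is to insert pivot routers in descending order of credibility and snapshot the distance matrix after each level, so that higher-CR virtual topologies fall out as intermediate results. The sketch below follows that reading; the node labels, edge format, and snapshot scheme are illustrative assumptions, not the paper's code.

```python
import copy

INF = float("inf")

def cetml_floyd(n, edges, cr):
    """Successive Floyd-Warshall over credibility levels.

    n     -- number of routers, labelled 0..n-1
    edges -- list of (u, v, weight) undirected links
    cr    -- cr[v] is the credibility level of router v
    Returns {level: distance matrix using only pivots with CR >= level}.
    """
    dist = [[INF] * n for _ in range(n)]
    for v in range(n):
        dist[v][v] = 0
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
        dist[v][u] = min(dist[v][u], w)

    tables = {}
    # Insert pivots from the most credible routers downwards; after all pivots
    # of a level are relaxed, snapshot the matrix as that level's routing view.
    for level in sorted(set(cr), reverse=True):
        for k in [v for v in range(n) if cr[v] == level]:
            for i in range(n):
                for j in range(n):
                    if dist[i][k] + dist[k][j] < dist[i][j]:
                        dist[i][j] = dist[i][k] + dist[k][j]
        tables[level] = copy.deepcopy(dist)
    return tables

# Router 1 has low credibility; level-2 flows must detour around it.
tables = cetml_floyd(3, [(0, 1, 1), (1, 2, 1), (0, 2, 5)], cr=[2, 1, 2])
print(tables[2][0][2], tables[1][0][2])   # 5 2
```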
Delay-Aware Resource Scheduling Optimization in Network Function Virtualization
Xu Ran, Wang Wendong, Gong Xiangyang, Que Xirong
2018, 55(4):  738-747.  doi:10.7544/issn1000-1239.2018.20170926
Network function virtualization (NFV) aims to implement network functions in software so as to replace the proprietary hardware devices of traditional networks. In response to the growing demand for resources, software-based network functions bring issues such as the management of virtual network functions, low-latency scheduling, and efficient allocation of virtual network resources. The virtual network function scheduling problem itself is NP-hard. To ensure a good user experience with respect to the scheduling delays of virtual network function resources, it is necessary to allocate network resources reasonably, prevent over-provisioning of resources, and maintain low end-to-end latency. In this paper, we formulate the problem as an integer linear programming model. In addition, to meet the characteristics of highly dynamic networks, we design a greedy heuristic algorithm. The algorithm first constructs an auxiliary graph, and then selects node resources based on an analysis of the delay impact among different service chains; it considers the influence of network propagation delay and adopts a multi-path transmission scheme for multipoint processing functions. The final experimental results show that our algorithm can effectively solve the model and reduce the overall network service latency by 5% to 15% compared with previous research.
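The greedy selection step, placing each VNF of a chain on the node that adds the least propagation plus processing delay while respecting capacity, can be sketched as follows. All names and the cost model are illustrative assumptions; the paper's auxiliary graph and multi-path transmission scheme are not reproduced.

```python
def greedy_place_chain(chain_demands, nodes, prop_delay, proc_delay, capacity):
    """Greedy placement of a service chain, minimizing added delay per step.

    chain_demands -- CPU demand of each VNF, in chain order
    nodes         -- candidate node ids
    prop_delay    -- prop_delay[u][v]: propagation delay between nodes
    proc_delay    -- proc_delay[v]: per-unit processing delay at node v
    capacity      -- mutable dict of remaining CPU per node
    Illustrative sketch only; not the paper's full algorithm.
    """
    placement, prev = [], None
    for demand in chain_demands:
        feasible = [v for v in nodes if capacity[v] >= demand]
        if not feasible:
            return None  # no valid embedding under this greedy order
        def added_delay(v):
            hop = 0.0 if prev is None else prop_delay[prev][v]
            return hop + demand * proc_delay[v]
        best = min(feasible, key=added_delay)
        capacity[best] -= demand
        placement.append(best)
        prev = best
    return placement

caps = {"a": 4, "b": 4}
prop = {"a": {"a": 0, "b": 2}, "b": {"a": 2, "b": 0}}
proc = {"a": 1.0, "b": 0.5}
print(greedy_place_chain([2, 2], ["a", "b"], prop, proc, caps))   # ['b', 'b']
```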
A Hierarchical Method for Survivable Service Function Chain Embedding
Liu Yi, Zhang Hongqi, Yang Yingjie, Chang Dexian
2018, 55(4):  748-767.  doi:10.7544/issn1000-1239.2018.20170938
Aiming at embedding the service function chain (SFC) into the substrate network under single-node or single-link failures, this paper proposes a hierarchical method for survivable SFC embedding. The method takes into account both improving the survivability of SFCs and reducing the resource consumption of the substrate network. For the key service function chain (KSFC) delivering essential services, it pre-allocates backup resources for possible future failures. Meanwhile, for the normal service function chain (NSFC) delivering common services, it quickly re-embeds the failed nodes and links after a failure has occurred. First, taking into account minimizing the service latency of SFCs, we provide two mixed integer linear programming (MILP) formulations for the survivable embedding problems of KSFCs and NSFCs. Second, we propose two heuristic algorithms to solve the formulations quickly, namely the primary-backup service path building algorithm (PBSP-Bld) for KSFCs and the failed service path restoring algorithm (FSP-Res) for NSFCs. PBSP-Bld not only embeds nodes and links alternately based on a greedy strategy to reduce the service latency of the SFC, but also establishes a bridging path between the primary and backup service paths, which speeds up path switching and reduces the packet loss rate during switching. Additionally, FSP-Res finds the best re-embedding positions for failed nodes by means of a max-flow problem, which increases the number of recovered NSFCs. It also uses a modified version of Dijkstra's shortest path algorithm to select re-embedding paths with low latency. Experimental results demonstrate the good performance of the heuristic algorithms under different network conditions. Moreover, the proposed method keeps the success ratio of running SFCs above 59.2% under the simulated network environment.
A Backup and Recovery Mechanism for Security Service Chain Fault in Network Function Virtualization Environment
Huang Rui, Zhang Hongqi, Chang Dexian
2018, 55(4):  768-781.  doi:10.7544/issn1000-1239.2018.20170942
Considering the fault problem of security service chains (SSC) in a network function virtualization (NFV) environment, this paper proposes a backup and recovery mechanism for SSCs based on proportional resource reservation. Following a proactive processing idea, it divides the resources of the substrate network proportionally and constructs a candidate set for each substrate node and link beforehand. When a node fault suddenly occurs, it chooses the fault recovery targets from the candidate set and allocates the reserved backup resources. It solves the node fault remapping problem immediately by using an improved discrete particle swarm optimization (DPSO) algorithm, decreasing resource occupancy while increasing the repair rate. When a link fault suddenly occurs, it redirects the affected traffic to the available links in the candidate set by changing the traffic splitting ratio of the substrate paths. We design a dynamic path splitting algorithm to solve the link fault redirection problem effectively, maximizing the residual value of the underlying substrate network resources. The simulation experiments verify the proposed algorithms from two aspects: the adaptability under different substrate network environments and the validity under different fault models. In addition, we make a preliminary exploration of the appropriate value of the main reservation proportion and its impact on the proposed backup and recovery mechanism.
A Method of Computing Minimal Diagnoses Based on Pseudo-Failure-Degree to Create New Enumeration Tree
Ouyang Dantong, Zhi Huayun, Liu Bowen, Zhang Liming, Zhang Yonggang
2018, 55(4):  782-790.  doi:10.7544/issn1000-1239.2018.20170016
Model-based diagnosis is a challenging problem in the field of artificial intelligence. In recent years, SAT solvers have evolved rapidly and have been applied to model-based diagnosis with significant results. Through an in-depth study of the model-based diagnosis algorithm LLBRS-Tree, we put forward the concepts of component static pseudo-failure-degree and dynamic pseudo-failure-degree, derived from the topology information of circuit elements, the difference between the observed and expected behavior of the system, and the characteristics of the set enumeration tree. First, the static pseudo-failure-degrees of all components are calculated, and a new enumeration tree is generated by reordering the components in descending order of static pseudo-failure-degree. When a new minimal diagnosis is found, the dynamic pseudo-failure-degrees of the related components are updated and the enumeration tree is dynamically recreated. A large number of redundant solutions can be deleted and the number of calls to the SAT solver is greatly reduced, so all the minimal diagnoses are found faster. Experimental results show that the presented DYN-Tree algorithm runs faster than the LLBRS-Tree algorithm as the number of components and the length of the minimal diagnoses in the diagnosis system increase.
Computing the Minimal Hitting Sets with Dynamic Maximum Element Coverage Value
Deng Zhaoyong, Ouyang Dantong, Geng Xuena, Liu Jie
2018, 55(4):  791-801.  doi:10.7544/issn1000-1239.2018.20160900
The minimal conflict set cluster consists of all the minimal conflict sets, and the minimal hitting sets of all the minimal conflict sets are the candidate faults of the system to be diagnosed. Therefore, computing all the minimal hitting sets from the minimal conflict set cluster is an important step in model-based diagnosis. In this paper, a new algorithm is proposed to obtain all the minimal hitting sets by using a dynamic maximum element coverage value. The algorithm processes elements in descending order of their coverage value. The search space can be greatly reduced by adding the corresponding heuristic strategy and pruning strategy to the process of solving the hitting sets. An adjacency list is used to store the minimal conflict set cluster: among the basic storage structures, an adjacency list has a lower space cost than a matrix, and the adjacency pointers can quickly locate the minimal conflict sets covered by a given element. When a hitting set is obtained, its minimality is checked using the minimal hitting set decision rule. Therefore, the algorithm generates all, and only, the minimal hitting sets. Experimental results show that the proposed algorithm has better computational efficiency than other algorithms.
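For readers unfamiliar with the hitting-set step, the sketch below enumerates minimal hitting sets by branching on an unhit conflict set, trying high-coverage elements first, and filtering out non-minimal candidates. It only illustrates the coverage-ordered search; the paper's adjacency-list storage, pruning strategies, and decision rule are not reproduced.

```python
def minimal_hitting_sets(conflict_sets):
    """Enumerate the minimal hitting sets of a conflict-set cluster.

    Branches on the elements of the first conflict set not yet hit, trying
    elements with the largest remaining coverage first, then filters out
    non-minimal candidates. Illustrative sketch only.
    """
    conflict_sets = [frozenset(cs) for cs in conflict_sets]
    candidates = set()

    def coverage(e, sets_left):
        return sum(e in cs for cs in sets_left)

    def search(partial):
        unhit = [cs for cs in conflict_sets if not cs & partial]
        if not unhit:
            candidates.add(partial)
            return
        # Branch on one unhit conflict set, largest dynamic coverage first.
        for e in sorted(unhit[0], key=lambda x: -coverage(x, unhit)):
            search(partial | {e})

    search(frozenset())
    # Keep only candidates with no proper subset among the candidates.
    return [hs for hs in candidates
            if not any(other < hs for other in candidates)]

print(minimal_hitting_sets([{1, 2}, {2, 3}, {1, 3}]))
# three minimal hitting sets: {1,2}, {1,3}, {2,3} (order may vary)
```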
Intuitionistic Fuzzy Entropy Feature Selection Algorithm Based on Adaptive Neighborhood Space Rough Set Model
Yao Sheng, Xu Feng, Zhao Peng, Ji Xia
2018, 55(4):  802-814.  doi:10.7544/issn1000-1239.2018.20160919
Feature selection is a vital technology in data preprocessing. It selects the most effective features from the original data sets in order to reduce their dimensionality, and thereby improves the performance of learning algorithms. Existing feature selection algorithms based on the neighborhood rough set model do not consider uneven data distribution, which leads to defects in the neighborhoods of objects. To solve this problem, variance is adopted to measure the distribution of the data, the binary neighborhood space is redefined, and a rough set model of the adaptive binary neighborhood space is proposed on this basis. The new adaptive binary neighborhood space rough set model is then combined with neighborhood intuitionistic fuzzy entropy as the feature evaluation measure, and the corresponding feature selection algorithm is constructed. Experimental results on UCI data sets show that the proposed intuitionistic fuzzy entropy feature selection algorithm can select smaller feature subsets with higher classification accuracy, while also consuming less time. Therefore, the proposed feature selection algorithm shows clear advantages.
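As a rough illustration of an adaptive neighborhood, the sketch below widens the per-attribute radius in proportion to that attribute's spread. The parameter lam and the radius formula are assumptions for illustration, not the paper's definition of the adaptive binary neighborhood space.

```python
import numpy as np

def adaptive_neighborhood(X, i, lam=2.0):
    """Neighborhood of sample i with a radius adapted to attribute spread.

    The radius on each attribute is its standard deviation divided by lam,
    so widely spread attributes get wider neighborhoods. Sketch only; the
    paper's variance-based binary neighborhood space may differ.
    """
    X = np.asarray(X, dtype=float)
    radius = X.std(axis=0) / lam                 # per-attribute adaptive radius
    inside = np.all(np.abs(X - X[i]) <= radius, axis=1)
    return np.flatnonzero(inside)                # indices of neighbors (incl. i)

X = [[0.1, 5.0], [0.12, 5.2], [0.9, 9.0]]
print(adaptive_neighborhood(X, 0))               # [0 1]
```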
Anomaly Detection Algorithm of Data Center Network Based on LSDB
Xu Gang, Wang Zhan, Zang Dawei, An Xuejun
2018, 55(4):  815-830.  doi:10.7544/issn1000-1239.2018.20160970
At present, due to network attacks, configuration changes, and other factors, routing tables change frequently in data center networks. However, because of the lack of effective monitoring software, routing anomalies and route flapping are difficult to detect, and faults are difficult to localize. When a data center network failure occurs, the inability to locate the problem prolongs the repair time, degrades the user experience, and reduces operating income. This paper analyzes the current mainstream data center network architectures, communication protocols, and routing calculation principles, and then proposes LSAP, a data center network anomaly detection method based on the link state database (LSDB). By collecting the link state database, computing whole-network routes with an improved routing algorithm, and comparing snapshots of the LSDB and the RIB, LSAP can find abnormal links and abnormal routes and locate faults. Built on a big data analysis platform, LSAP computes routing tables in real time, processes millions of routing entries in seconds, and meets the analysis-rate requirements of current data centers. In a trial deployment in a data center, LSAP quickly reconstructs routing tables, detects topology changes, and performs statistical analysis of all route changes. It requires few changes to the network and does not affect network stability, so it is suitable for data centers with high stability requirements.
Route Prediction Method for Network Intrusion Using Absorbing Markov Chain
Hu Hao, Liu Yuling, Zhang Hongqi, Yang Yingjie, Ye Runguo
2018, 55(4):  831-845.  doi:10.7544/issn1000-1239.2018.20170087
Predictions of network intrusion intentions and paths are very significant for security administrators to deeply understand the possible threat behaviors of attackers. Existing work mainly focuses on path prediction under ideal attack scenarios. However, ideal attack paths do not entirely match the real-world paths adopted by intruders. In order to predict the attack path information of network intrusions accurately and comprehensively, a novel route prediction method based on the absorbing Markov chain (AMC) is proposed in this paper. First, a normalization algorithm for the AMC state transition probabilities is designed using the Markov and absorption properties, and the complete attack graph (AG) is proved to be mappable into the AMC. In addition, a probability metric for state transitions based on the common vulnerability scoring system (CVSS) is designed. Finally, detailed steps are put forward for predicting the expected number of visits to each attack state and the expected route length. Experimental analysis results indicate that our method can quantify the probability distribution of routes with different attack lengths and calculate the expected route length. Moreover, it can predict the expected number of atomic attacks needed to compromise the attack goal. The predictions can be used in node threat ranking. Hence, our approach provides timely guidance for network security protection in response to network attack threats.
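The expected number of visits and the expected route length in an absorbing Markov chain follow from the standard fundamental matrix N = (I - Q)^(-1), where Q is the transient-to-transient block of the transition matrix. A small numpy sketch is shown below; the transition matrix is toy data, not drawn from the paper, and the CVSS-based normalization is assumed to have produced it already.

```python
import numpy as np

def amc_expectations(P, transient):
    """Expected visits and expected path length in an absorbing Markov chain.

    P          -- full one-step transition matrix (rows sum to 1)
    transient  -- indices of the transient (non-absorbing) attack states
    Returns (N, t): N[i, j] is the expected number of visits to transient
    state j starting from transient state i, and t[i] is the expected number
    of steps (atomic attacks) before absorption.
    """
    P = np.asarray(P, dtype=float)
    Q = P[np.ix_(transient, transient)]              # transient-to-transient block
    N = np.linalg.inv(np.eye(len(transient)) - Q)    # fundamental matrix
    t = N.sum(axis=1)                                # expected steps to absorption
    return N, t

# Toy 3-state chain: states 0 and 1 transient, state 2 absorbing (attack goal).
P = [[0.2, 0.5, 0.3],
     [0.0, 0.4, 0.6],
     [0.0, 0.0, 1.0]]
N, t = amc_expectations(P, transient=[0, 1])
print(N)
print(t)
```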
Multi-Authority Attribute-Based Encryption Scheme with Privacy Protection
Yan Xixi, Liu Yuan, Li Zichen, Tang Yongli
2018, 55(4):  846-853.  doi:10.7544/issn1000-1239.2018.20161043
Attribute-based encryption (ABE) is a new cryptographic technique that guarantees fine-grained access control of outsourced encrypted data in the cloud. In order to protect users' sensitive information in the cloud, a multi-authority attribute-based encryption (MA-ABE) scheme with privacy protection is proposed. In the scheme, each user attribute is divided into two parts: the attribute name and the attribute value. Attribute values are hidden in the access structure to prevent them from being revealed to any third party, so the users' privacy is effectively preserved. In addition, the attribute names are used to construct the access structure, and the ciphertext length depends on the number of attribute names in the access policy rather than on all the attributes in the system. Besides, the scheme is secure against chosen plaintext attack under the decisional bilinear Diffie-Hellman (DBDH) assumption in the standard model. Compared with existing related schemes, both the ciphertext size and the users' secret key size are reduced, and the lower computing and storage costs make the scheme more effective in practical applications, especially when the number of user attributes is far smaller than the number of system attributes.
A Novel VoIP Steganography Method Based on Bayesian Network and Matrix Embedding
Gao Zhanzhan, Tang Guangming, Wang Shuo
2018, 55(4):  854-863.  doi:10.7544/issn1000-1239.2018.20161042
Voice over IP (VoIP) calls have become a common means of communication. Compared with traditional steganographic covers such as text and images, VoIP data flows offer better imperceptibility and more space for secret message embedding, so VoIP steganography has received increasing attention in recent years. So far, research has mainly revolved around the design of the embedding and extraction processes; however, the detection resistance of existing methods still needs to be strengthened, and they lack the guidance of steganographic security theory. This paper first analyzes the time sequence characteristics of speech frames, then defines steganographic security in the form of relative entropy based on a Bayesian network model. By analyzing the speech encoding process, a specific Bayesian network model for the fixed codebook parameters is established, and the parameters are divided into binary cover elements and ternary cover elements. To minimize the number of changes in the embedding process, matrix embedding is used to determine the change positions in the cover vector. To reduce the embedding impact on the statistical properties of the cover, the change directions of the ternary cover elements are determined by minimizing the security measure. Experimental results show that, under the premise of limited computational complexity, the novel VoIP steganography method achieves good perceptual transparency and outperforms prior methods in resisting blind steganalysis.
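Matrix embedding minimizes cover changes by syndrome coding; the classic [7,4] Hamming instance below hides 3 message bits in 7 cover bits with at most one change. It only illustrates the matrix-embedding step; the paper's binary/ternary cover elements drawn from fixed codebook parameters, and the security-measure-driven choice of change direction, are not modelled here.

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code; column i is the binary
# representation of i+1, so a nonzero syndrome mismatch indexes the bit to flip.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def embed(cover7, msg3):
    """Hide 3 message bits in 7 cover bits, changing at most one cover bit."""
    cover7 = np.array(cover7) % 2
    syndrome = H.dot(cover7) % 2
    diff = (syndrome + np.array(msg3)) % 2
    if diff.any():
        pos = int("".join(map(str, diff)), 2) - 1   # column index to flip
        cover7[pos] ^= 1
    return cover7

def extract(stego7):
    """Recover the 3 message bits as the syndrome of the stego bits."""
    return H.dot(np.array(stego7)) % 2

stego = embed([1, 0, 1, 1, 0, 0, 1], [1, 0, 1])
print(stego, extract(stego))   # one bit flipped, extracted bits equal [1 0 1]
```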
Memcached Optimization on High Performance I/O Technology
An Zhongqi, Du Hao, Li Qiang, Huo Zhigang, Ma Jie
2018, 55(4):  864-874.  doi:10.7544/issn1000-1239.2018.20160890
Existing in-memory object caching systems are bottlenecked by the latency overhead of traditional Ethernet and the limited amount of DRAM within the servers. Modern high-performance I/O technologies such as RDMA and NVMe provide a promising way to address these challenges. In this paper, we focus on the data plane efficiency of in-memory object caching systems and study the widely deployed Memcached, targeting fast message transfer and cost-effective storage extension based on high-performance I/O. First, the communication protocol is re-designed on RDMA semantics, and different strategies are applied according to the Memcached operation type and message payload size for optimal overall latency. Second, Memcached is altered to incorporate NVMe SSDs to expand storage capacity. A circular log structure is adopted to manage the two-level hierarchy of DRAM and SSD, and the SSD is accessed directly from user space to reduce software overhead. Finally, a JVM-enabled caching system named U2cache is presented. U2cache significantly improves performance by bypassing both the OS kernel and the JVM runtime, and latency is further hidden through pipelining and overlapping of memory copy, RDMA transfer, and SSD access. Benchmarking results indicate that U2cache achieves near-optimal performance of the underlying RDMA interconnect. Performance is further improved by 20% with careful optimization for transferring large messages. For accessing data located on the SSD, latency is reduced by up to 31% compared with kernel-based I/O.
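A toy illustration of the two-level log idea (newest entries in DRAM, oldest spilled to an SSD-backed log) is sketched below. The file-backed spill and the class layout are assumptions for illustration only, not U2cache's user-space NVMe design.

```python
import os
from collections import OrderedDict

class TwoLevelLog:
    """Toy append-style cache over a DRAM tier and an SSD spill file.

    New items live in an in-memory ordered dict; when the DRAM budget is
    exceeded, the oldest entries are appended to a flat file standing in for
    the NVMe SSD. U2cache's real layout is far more involved.
    """
    def __init__(self, dram_items, spill_path):
        self.dram_items, self.spill_path = dram_items, spill_path
        self.dram = OrderedDict()

    def put(self, key, value):
        self.dram[key] = value
        self.dram.move_to_end(key)
        while len(self.dram) > self.dram_items:
            old_key, old_val = self.dram.popitem(last=False)   # evict oldest
            with open(self.spill_path, "a") as f:
                f.write(f"{old_key}\t{old_val}\n")

    def get(self, key):
        if key in self.dram:
            return self.dram[key]
        if os.path.exists(self.spill_path):          # slow path: scan spill log
            with open(self.spill_path) as f:
                for line in f:
                    k, v = line.rstrip("\n").split("\t", 1)
                    if k == key:
                        return v                      # toy lookup: first match
        return None

log = TwoLevelLog(dram_items=2, spill_path="/tmp/u2cache_spill.log")
for i in range(4):
    log.put(f"k{i}", f"v{i}")
print(log.get("k0"), log.get("k3"))                  # k0 comes from the spill file
```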
Porting and Optimizing GTC-P on TaihuLight Supercomputer with OpenACC
Wang Yichao, Lin Xinhua, Cai Linjin, Tang William, Ethier Stephane, Wang Bei, See Simon, Satoshi Matsuoka
2018, 55(4):  875-884.  doi:10.7544/issn1000-1239.2018.20160871
Sunway TaihuLight, with a sustained performance of 93 PFLOPS, is the No.1 supercomputer in the latest Top500 list. It provides a high-level directive language called OpenACC, compatible with the OpenACC 2.0 standard with some customized extensions. GTC-P is a discovery-science-capable real-world application code based on the particle-in-cell (PIC) algorithm that is well established in the HPC area. Our motivation is to port the GTC-P code to the TaihuLight supercomputer with OpenACC. Since the Sunway OpenACC compiler currently cannot resolve the performance bottleneck of GTC-P when the code is ported directly to TaihuLight, we have applied three optimizations to an “intermediate” version of the code generated by the compiler: 1) elimination of atomic operations; 2) avoidance of expensive global memory access instructions; 3) manual addition of SIMD intrinsics. The results from our numerical experiments show that these optimizations produce 1.6X and 8.6X speed-ups on 64 CPE cores compared with one MPE core for the key charge and push PIC kernels, respectively. Overall, the accelerator makes the entire GTC-P code faster by a factor of 2.5X. Our findings demonstrate that manual optimizations of the “intermediate” code are important for achieving significantly improved performance of PIC applications on TaihuLight with OpenACC.
A Direct Send Image Compositing Algorithm with Minimal Communication Costs
Wang Pan, Yang Pingli, Huang Shaohua, Lin Chengdi, Kong Longxing
2018, 55(4):  885-892.  doi:10.7544/issn1000-1239.2018.20160875
Sort-last is the most widely used method for large-scale parallel visualization, and its bottleneck is the image compositing stage. Direct Send is the cornerstone of all other compositing algorithms, so improving its performance is a meaningful way to accelerate image composition. To minimize the communication cost of image compositing, we propose a new type of Direct Send method. Compared with the static partition strategy of traditional Direct Send, our method is dynamic and adaptive, and it is composed of two phases. First, we compute the active-pixel prefix sums of each image in parallel with GPU multi-threads. This step removes background pixels very effectively, so all the images are compressed efficiently. Second, a dynamic programming model is built and solved to generate the optimal partition of sub-images for Direct Send image compositing, which ensures that the communication cost of Direct Send is minimal. In the experiments, we first measure the image compression ratios of our method and obtain the optimal pixel block size. Then, we compare the compositing time of our method with the RLE and greedy algorithms on varying numbers of rendering nodes, showing that our method is more efficient than the two existing methods.
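The two phases can be pictured with a small sketch: per-row active-pixel prefix sums, then a dynamic program that splits the rows into contiguous chunks with the smallest possible maximum load. The row-wise granularity and the load-only cost model are simplifying assumptions, not the paper's per-pixel, communication-cost formulation.

```python
def active_prefix_sums(image, background=0):
    """Prefix sums of non-background pixel counts per row of one image."""
    prefix, total = [0], 0
    for row in image:
        total += sum(1 for p in row if p != background)
        prefix.append(total)
    return prefix

def balanced_partition(prefix, parts):
    """Split rows into `parts` contiguous chunks minimizing the largest
    active-pixel load, via the classic linear-partition dynamic program.
    Illustrative stand-in for the paper's sub-image partition model."""
    n, INF = len(prefix) - 1, float("inf")
    # cost[i][k]: minimal possible maximum load for the first i rows, k chunks
    cost = [[INF] * (parts + 1) for _ in range(n + 1)]
    cut = [[0] * (parts + 1) for _ in range(n + 1)]
    cost[0][0] = 0
    for i in range(1, n + 1):
        for k in range(1, parts + 1):
            for j in range(k - 1, i):
                load = max(cost[j][k - 1], prefix[i] - prefix[j])
                if load < cost[i][k]:
                    cost[i][k], cut[i][k] = load, j
    # recover the chunk boundaries
    bounds, i = [], n
    for k in range(parts, 0, -1):
        bounds.append((cut[i][k], i))
        i = cut[i][k]
    return list(reversed(bounds)), cost[n][parts]

img = [[0, 0, 1], [2, 3, 0], [0, 0, 0], [4, 5, 6]]
prefix = active_prefix_sums(img)
print(balanced_partition(prefix, parts=2))   # ([(0, 2), (2, 4)], 3)
```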