ISSN 1000-1239 CN 11-1777/TP

    2020 Data-Driven Network

    Journal of Computer Research and Development    2020, 57 (4): 669-670.   DOI: 10.7544/issn1000-1239.2020.qy0401
    Accepted: 01 April 2020

    A Survey on Machine Learning Based Routing Algorithms
    Liu Chenyi, Xu Mingwei, Geng Nan, Zhang Xiang
    Journal of Computer Research and Development    2020, 57 (4): 671-687.   DOI: 10.7544/issn1000-1239.2020.20190866
    The rapid development of the Internet has brought many new applications, including real-time multimedia services, remote cloud services, etc. These applications require diverse types of service quality, which poses a significant challenge to current best-effort routing algorithms. Following the recent successes of machine learning in games, computer vision and natural language processing, many researchers have tried to design “smart” routing algorithms based on machine learning methods. In contrast to traditional model-based, decentralized routing algorithms (e.g. OSPF), machine learning based routing algorithms are usually data-driven, which allows them to adapt to dynamically changing network environments and accommodate different service quality requirements. Data-driven routing algorithms based on machine learning have shown great potential to become an important part of the next generation network. However, research on artificially intelligent routing is still at a very early stage. In this paper we first survey current research on data-driven routing algorithms based on machine learning, presenting the main ideas, application scenarios, and pros and cons of the different works. Our analysis shows that current research mainly establishes the principles of machine learning based routing algorithms but is still far from deployment in real scenarios. We then analyze different methods of training and deploying machine learning based routing algorithms in real scenarios and propose two reasonable approaches to train and deploy such routing algorithms with low overhead and high reliability. Finally, we discuss the opportunities and challenges and point out several potential research directions for machine learning based routing algorithms in the future.
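    As a concrete illustration of the class of data-driven routing algorithms surveyed here, the sketch below shows a minimal Q-routing style next-hop selector; the state encoding, parameters and delay feedback are illustrative assumptions, not a method taken from any of the surveyed papers.

```python
import random
from collections import defaultdict

# Q[(node, dst)][next_hop] = estimated delivery delay via that neighbor
Q = defaultdict(lambda: defaultdict(float))

def choose_next_hop(node, dst, neighbors, epsilon=0.1):
    """Epsilon-greedy choice of the neighbor with the lowest estimated delay."""
    if random.random() < epsilon:
        return random.choice(neighbors)
    return min(neighbors, key=lambda n: Q[(node, dst)][n])

def update(node, dst, next_hop, observed_delay, alpha=0.5, gamma=0.9):
    """After forwarding, move the estimate toward the observed per-hop delay
    plus the chosen neighbor's own best estimate toward the destination."""
    downstream = min(Q[(next_hop, dst)].values(), default=0.0)
    target = observed_delay + gamma * downstream
    Q[(node, dst)][next_hop] += alpha * (target - Q[(node, dst)][next_hop])
```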
    A Review on the Application of Machine Learning in SDN Routing Optimization
    Wang Guizhi, Lü Guanghong, Jia Wucai, Jia Chuanghui, Zhang Jianshen
    Journal of Computer Research and Development    2020, 57 (4): 688-698.   DOI: 10.7544/issn1000-1239.2020.20190837
    With the rapid development of network technology and the continuous emergence of new applications, the sharp increase in network data makes network management extremely complicated. Devices in traditional networks are diverse, complex to configure, and difficult to manage. The emergence of new network architectures such as software defined networking (SDN) brings new hope to network management: it frees the network from the limitations of hardware equipment and gives it flexibility, programmability and other advantages. A good routing mechanism affects the performance of the whole network, and the centralized control of SDN opens new research directions for applying machine learning to routing mechanisms. This paper first discusses the current status of SDN routing optimization, and then summarizes recent research on machine learning for SDN routing from the perspectives of supervised learning and reinforcement learning. Finally, in order to meet the QoS (quality of service) requirements of different applications and the QoE (quality of experience) of different users, this paper puts forward data-driven cognitive routing as a development trend. Endowing network nodes with cognitive behaviors such as perception, memory, search, decision-making, reasoning and explanation can speed up the path-finding process, optimize route selection and improve network management.
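    To make the supervised-learning branch of this review concrete, the sketch below weights each SDN link with a delay predicted by a regressor and routes over the weighted topology; the feature set, training data and model choice are assumptions for illustration, not one of the schemes reviewed in the paper.

```python
import networkx as nx
from sklearn.ensemble import RandomForestRegressor

def train_delay_model(link_feature_rows, measured_delays_ms):
    """Fit a regressor mapping per-link statistics (e.g. utilization, loss,
    queue length) to the delay measured by the SDN controller."""
    return RandomForestRegressor(n_estimators=50).fit(link_feature_rows, measured_delays_ms)

def ml_route(model, links, link_features, src, dst):
    """Weight every link with its predicted delay, then return the lightest path."""
    g = nx.Graph()
    for (u, v), feats in zip(links, link_features):
        g.add_edge(u, v, weight=float(model.predict([feats])[0]))
    return nx.shortest_path(g, src, dst, weight="weight")
```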
    Building Network Domain Knowledge Graph from Heterogeneous YANG Models
    Dong Yongqiang, Wang Xin, Liu Yongbo, Yang Wang
    Journal of Computer Research and Development    2020, 57 (4): 699-708.   DOI: 10.7544/issn1000-1239.2020.20190882
    With the continuous expansion of network scale, network management and operation face great challenges of complexity and heterogeneity. Existing intelligent network operation approaches lack a unified data model at the knowledge level to guide the processing of network big data. As a data modeling language, YANG has been used to model the configuration and state data transmitted by the NETCONF protocol. This paper proposes an intelligent network operation scheme that builds a network domain knowledge graph from heterogeneous YANG models. Following the YANG language specification, the scheme defines basic principles for constructing a network domain ontology, yielding an ontology structure containing 51 classes and more than 70 properties. Then, heterogeneous YANG models from different standardization organizations and vendors are extracted and instantiated into the network domain knowledge graph. Entity alignment methods are employed to discover the semantic co-reference relationships among the multi-source YANG models. The resulting knowledge graph provides a unified semantic framework for organizing massive network operation data, which eliminates the need to construct an AIOps ontology manually. As such, the configuration management and operational maintenance of networks can be greatly simplified, suggesting new solutions for network performance optimization and anomaly detection.
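    As a rough illustration of instantiating YANG schema nodes into a knowledge graph, the sketch below adds one node as an RDF entity with rdflib; the ontology namespace, class names and properties are hypothetical placeholders, not the 51-class ontology built in the paper.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

NET = Namespace("http://example.org/network-ontology#")  # hypothetical ontology IRI
g = Graph()

def add_yang_node(module, path, node_kind, description=""):
    """Instantiate one YANG schema node (container/list/leaf) as a graph entity."""
    entity = NET[f"{module}_{path.replace('/', '_')}"]
    g.add((entity, RDF.type, NET[node_kind.capitalize()]))   # e.g. net:Container, net:Leaf
    g.add((entity, RDFS.label, Literal(path)))
    g.add((entity, NET.definedInModule, NET[module]))        # hypothetical property
    if description:
        g.add((entity, RDFS.comment, Literal(description)))
    return entity

# e.g. a leaf from a standard interface model (illustrative path only)
add_yang_node("ietf-interfaces", "interfaces/interface/name", "leaf", "Interface name")
print(g.serialize(format="turtle"))
```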
    DNN Inference Acceleration via Heterogeneous IoT Devices Collaboration
    Sun Sheng, Li Xujing, Liu Min, Yang Bo, Guo Xiaobing
    Journal of Computer Research and Development    2020, 57 (4): 709-722.   DOI: 10.7544/issn1000-1239.2020.20190863
    Deep neural networks (DNNs) have been intensively deployed in a variety of intelligent applications (e.g., image and video recognition). Nevertheless, due to DNNs’ heavy computation burden, resource-constrained IoT devices are ill-suited to executing DNN inference tasks locally. Existing cloud-assisted approaches are severely affected by unpredictable communication latency and the unstable performance of remote servers. As a countermeasure, leveraging collaborative IoT devices to achieve distributed and scalable DNN inference is a promising paradigm. However, existing works only consider homogeneous IoT devices with static partitioning. Thus, there is an urgent need for a framework that adaptively partitions DNN tasks and orchestrates distributed inference among heterogeneous resource-constrained IoT devices. Such a framework faces two main challenges. First, it is difficult to accurately profile the multi-layer inference latency of DNNs. Second, it is difficult to learn the collaborative inference strategy adaptively and in real time in heterogeneous environments. To this end, we first propose an interpretable multi-layer prediction model to abstract complex layer parameters. Furthermore, we leverage evolutionary reinforcement learning (ERL) to adaptively determine a near-optimal partitioning strategy for DNN inference tasks. Real-world experiments on Raspberry Pi devices show that the proposed method significantly accelerates inference in dynamic and heterogeneous environments.
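    The core partitioning decision can be pictured with the brute-force sketch below, which picks a single hand-off layer between two devices given predicted per-layer latencies and transfer costs; it is a simplified stand-in for the paper's evolutionary reinforcement learning search, and its inputs are assumed to come from a latency prediction model.

```python
def best_split(layer_ms_dev_a, layer_ms_dev_b, transfer_ms):
    """Pick the layer index at which to hand the DNN off from device A to device B.
    layer_ms_dev_a/b: predicted per-layer latency (ms) on each device;
    transfer_ms[k]: cost of shipping the tensor that layer k consumes
    (the raw input when k == 0), so len(transfer_ms) == n_layers + 1.
    Brute force over all split points; a stand-in for the ERL search."""
    n = len(layer_ms_dev_a)
    best = min(
        range(n + 1),
        key=lambda k: sum(layer_ms_dev_a[:k]) + transfer_ms[k] + sum(layer_ms_dev_b[k:]),
    )
    return best  # layers [0, best) run on device A, layers [best, n) on device B
```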
    Bus-Data-Driven Forwarding Scheme for Urban Vehicular Networks
    Tang Xiaolan, Xu Yao, Chen Wenlong
    Journal of Computer Research and Development    2020, 57 (4): 723-735.   DOI: 10.7544/issn1000-1239.2020.20190876
    In urban vehicular ad hoc networks, the complex and dynamic traffic conditions and the diversity of driving routes cause the network topology to change quickly and make the communication links between vehicles unstable, which degrades the data forwarding performance of vehicular networks. As an important public transportation facility in cities, buses have regular driving routes and departure times, and bus lines cover urban streets widely. Compared with private cars, buses are better data carriers and forwarders, and help achieve more reliable vehicle-to-vehicle communication. This paper proposes a bus-data-driven forwarding scheme for urban vehicular networks, called BUF, which aims to improve the transmission efficiency of urban vehicular networks by analyzing bus line data and selecting appropriate buses as forwarding nodes. First, a bus stop topology graph is constructed, in which all bus stops in the scenario are vertices and an edge links two vertices if there exist bus lines continuously passing through these two stops. The cost of an edge is computed from the expected number of buses and the distance between the two stops. The optimal forwarding path from the source stop to the destination stop is then calculated using Dijkstra's algorithm, as sketched below. Moreover, to ensure that data is forwarded along the optimal path, neighbor backbone buses, whose subsequent stops overlap the optimal path to a degree greater than zero, take priority as forwarding nodes; the greater the overlapping degree, the higher the bus's priority to forward data. When no backbone bus exists, neighbor buses that will pass the expected next stop, called supplement buses, are selected as relays. In scenarios with neither backbone nor supplement buses, private cars are used to establish a multi-hop link to a suitable bus forwarder, in order to accelerate data forwarding. Experimental results with real Beijing road network and bus line data show that, compared with other schemes, the BUF scheme achieves a higher data delivery rate and shorter delay.
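    The path computation step amounts to a shortest-path search over the bus stop graph; the sketch below runs a plain Dijkstra over an adjacency list whose edge costs stand in for the paper's weighting by inter-stop distance and expected number of buses (the exact cost formula is not reproduced here).

```python
import heapq

def shortest_stop_path(graph, src, dst):
    """graph: {stop: [(neighbor_stop, edge_cost), ...]}.
    Returns the stop-by-stop forwarding path, or None if dst is unreachable."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst != src and dst not in prev:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]
```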
    Adversarial Example Attack Analysis of Low-Dimensional Industrial Control Network System Dataset
    Zhou Wen, Zhang Shikun, Ding Yong, Chen Xi
    Journal of Computer Research and Development    2020, 57 (4): 736-745.   DOI: 10.7544/issn1000-1239.2020.20190844
    The growth in cyber attacks on industrial control systems (ICS) highlights the need for network intrusion anomaly detection. Researchers have proposed various anomaly detection models for industrial control network traffic based on machine learning algorithms. However, adversarial example attacks are hindering the widespread application of machine learning models. Existing research on adversarial example attacks has focused on feature-rich, high-dimensional datasets, whereas the relatively fixed network topology of an industrial control system means that an ICS dataset contains only a small number of features. It is unknown whether existing findings on adversarial examples carry over to low-dimensional ICS datasets. We analyze the relationship between four common optimization algorithms (SGD, RMSProp, AdaDelta and Adam) and adversarial example attacking capability, and analyze how well typical machine learning algorithms defend against adversarial example attacks, through experiments on a low-dimensional natural gas dataset. We also investigate whether training on adversarial examples can improve the anti-attack ability of deep learning algorithms. Moreover, a new index, the “Year-to-Year Loss Rate”, is proposed to evaluate the white-box attacking ability of adversarial examples. Experimental results on the natural gas dataset show that: 1) the optimization algorithm does affect the white-box attacking ability of adversarial examples; 2) adversarial examples are able to carry out black-box attacks against each typical machine learning algorithm; 3) compared with decision tree, random forest, support vector machine, AdaBoost, logistic regression and convolutional neural network, the recurrent neural network is the best at resisting black-box attacks by adversarial examples; 4) adversarial example training can improve the defending ability of deep learning models.
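    For reference, white-box adversarial examples of the kind evaluated here are commonly crafted with the fast gradient sign method; the PyTorch sketch below is a generic FGSM implementation and is not necessarily the exact crafting procedure used in the paper.

```python
import torch

def fgsm_example(model, x, y, epsilon=0.05):
    """Craft a white-box adversarial example with the fast gradient sign method.
    model: a trained classifier (e.g. trained with SGD/RMSProp/AdaDelta/Adam);
    x: input features, y: true class indices."""
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    model.zero_grad()
    loss.backward()
    # Step each feature in the direction that increases the loss
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```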
    Burst-Analysis Website Fingerprinting Attack Based on Deep Neural Network
    Ma Chencheng, Du Xuehui, Cao Lifeng, Wu Bei
    Journal of Computer Research and Development    2020, 57 (4): 746-766.   DOI: 10.7544/issn1000-1239.2020.20190860
    Anonymous networks, represented by Tor, are communication intermediary networks that hide users' data transmission behavior. Criminals use anonymous networks to engage in cybercrime, which creates great difficulties for network supervision. Website fingerprinting attack technology is a feasible technique for cracking anonymous communication: it can discover intranet users who secretly access sensitive websites over an anonymous network, and is therefore an important means of network supervision. Applying neural networks to website fingerprinting attacks has broken through the performance bottleneck of traditional methods, but existing research has not fully considered designing the network structure around the characteristics of Tor traffic, such as bursts, or around the characteristics of website fingerprinting attacks. The resulting neural networks are overly complicated and their analysis modules redundant, leading to incomplete feature extraction and analysis and slow execution. Based on an analysis of Tor traffic characteristics, a lightweight burst feature extraction and analysis module built on a one-dimensional convolutional network is designed, and a burst-analysis website fingerprinting attack method based on a deep neural network is proposed. Furthermore, to address the shortcoming of simply thresholding fingerprint vectors in open-world scenarios, a fingerprint vector analysis model based on the random forest algorithm is designed. The classification accuracy of the improved model reaches 99.87%, and the model performs well in alleviating concept drift, bypassing defenses against website fingerprinting attacks, identifying Tor hidden websites, training with small amounts of data, and run time, which improves the practicality of applying website fingerprinting attacks to real networks.
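    To illustrate the kind of lightweight one-dimensional convolutional burst analysis described above, the PyTorch sketch below maps a signed burst-length sequence to class scores; the layer sizes, kernel widths and pooling choices are illustrative assumptions rather than the paper's exact architecture.

```python
import torch.nn as nn

class BurstFeatureExtractor(nn.Module):
    """Lightweight 1-D convolutional block over a burst sequence
    (signed burst lengths extracted from a Tor trace)."""
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.AdaptiveMaxPool1d(8),
        )
        self.classifier = nn.Linear(64 * 8, n_classes)

    def forward(self, bursts):              # bursts: (batch, 1, sequence_length)
        z = self.features(bursts).flatten(1)
        return self.classifier(z)           # fingerprint vector / class scores
```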
    Selection of Network Defense Strategies Based on Stochastic Game and Tabu Search
    Sun Qian, Xue Leiqi, Gao Ling, Wang Hai, Wang Yuxiang
    Journal of Computer Research and Development    2020, 57 (4): 767-777.   DOI: 10.7544/issn1000-1239.2020.20190870
    The network defence strategy is the key factor determining the effectiveness of network security protection. Existing research on network defence decision making assumes fully rational players and particular parameter choices for the attack and defence benefit functions, and therefore fails to model factors present in real network attack and defence, such as information asymmetry and legal punishment, which reduces the practicability and reliability of the resulting strategies. In this paper, a tabu stochastic game model is constructed under the precondition of bounded rationality: the tabu search algorithm is introduced to analyze the bounded rationality of the stochastic game, and a search algorithm with a memory function is designed. The tabu list data structure realizes the memory function, and this data-driven memory, combined with the game model, is used to obtain the optimal defence strategy. Experimental results show that this method improves the accuracy of quantifying attack and defence benefits, improves the accuracy of defence benefits compared with existing typical methods, and has better space complexity than reinforcement learning and other typical algorithms.
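    The memory-equipped search can be sketched as a generic tabu search over a finite set of candidate defence strategies; in the snippet below the payoff and neighborhood functions are placeholders for the game-derived benefit quantification, which the abstract does not spell out.

```python
from collections import deque

def tabu_search(payoff, neighbors, start, iters=100, tabu_len=10):
    """Generic tabu search over defence strategies.
    payoff(s): defender benefit of strategy s (placeholder for the game model);
    neighbors(s): candidate moves from s. The tabu list is the memory structure."""
    current = best = start
    tabu = deque(maxlen=tabu_len)
    for _ in range(iters):
        # Aspiration: a tabu move is still allowed if it beats the best so far
        candidates = [s for s in neighbors(current)
                      if s not in tabu or payoff(s) > payoff(best)]
        if not candidates:
            break
        current = max(candidates, key=payoff)
        tabu.append(current)
        if payoff(current) > payoff(best):
            best = current
    return best
```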
    Unified Anomaly Detection for Syntactically Diverse Logs in Cloud Datacenter
    Zhang Shenglin, Li Dongwen, Sun Yongqian, Meng Weibin, Zhang Yuzhe, Zhang Yuzhi, Liu Ying, Pei Dan
    Journal of Computer Research and Development    2020, 57 (4): 778-790.   DOI: 10.7544/issn1000-1239.2020.20190875
    Benefiting from the rapid development of natural language processing and machine learning, log-based automatic anomaly detection is becoming increasingly popular for the software and hardware systems in cloud datacenters. Current unsupervised learning methods, which require no labelled anomalies, still need a large number of normal logs and generally suffer from low accuracy. Current supervised learning methods are accurate but require substantial labelling effort, because the syntax of logs generated by different software/hardware systems varies greatly, so for each type of log a supervised method needs sufficient anomaly labels to train its own anomaly detection model. Meanwhile, different types of logs usually have the same or similar semantics when anomalies occur. In this paper, we propose LogMerge, which learns the semantic similarity among different types of logs and then transfers anomaly patterns across them, significantly reducing labelling effort. LogMerge employs a word embedding method to construct vectors of words and templates, and then uses a clustering technique to group templates by semantics, addressing the challenge that different types of logs differ in syntax. In addition, LogMerge combines a CNN and an LSTM to build the anomaly detection model, which both effectively extracts the sequential features of logs and minimizes the impact of noise in logs. Extensive experiments on publicly available datasets demonstrate that LogMerge achieves higher accuracy than current supervised/unsupervised learning methods. Moreover, LogMerge remains accurate when there are few anomaly labels in the target type of logs, and therefore significantly reduces labelling effort.
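    The cross-log-type grouping step can be pictured with the sketch below, which averages pre-trained word vectors over a template's tokens and clusters the resulting template vectors; the embedding lookup, vector dimension and cluster count are assumptions, and the paper's own clustering technique may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def template_vector(template, word_vec, dim=100):
    """Average the word embeddings of a log template's tokens
    (word_vec is any pre-trained lookup, e.g. word2vec trained on the log corpus)."""
    vecs = [word_vec[w] for w in template.split() if w in word_vec]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def group_templates(templates, word_vec, n_groups=20):
    """Cluster syntactically different templates by semantics so that anomaly
    patterns learned on one log type can transfer to semantically similar ones."""
    X = np.stack([template_vector(t, word_vec) for t in templates])
    labels = KMeans(n_clusters=n_groups, n_init=10).fit_predict(X)
    return dict(zip(templates, labels))
```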
    The Optimization Method of Wireless Network Attacks Detection Based on Semi-Supervised Learning
    Wang Ting, Wang Na, Cui Yunpeng, Li Huan
    Journal of Computer Research and Development    2020, 57 (4): 791-802.   DOI: 10.7544/issn1000-1239.2020.20190880
    To optimize attack detection in high-dimensional and complex wireless network traffic data using deep learning, this paper proposes WiFi-ADOM, a WiFi network attack detection optimization method based on semi-supervised learning. First, based on the stacked sparse auto-encoder (SSAE), an unsupervised learning model, two types of network traffic feature representation vectors are proposed: a new feature value vector and an original feature weight value vector. Then, the original feature weight value vector is used to initialize the weights of a supervised deep neural network to obtain a preliminary attack-type result, and the unsupervised clustering method Bi-kmeans is applied to the new feature value vectors to produce a corrective term for discriminating unknown attacks. Finally, the preliminary attack-type result and the corrective term for unknown attack discrimination are combined to obtain the final attack-type result. Comparison with existing attack detection methods on the public wireless network traffic dataset AWID verifies that WiFi-ADOM achieves the best detection performance. The importance of individual features for network attack detection is also explored. The results show that WiFi-ADOM can effectively detect unknown attacks while maintaining detection performance.
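    To make the SSAE component concrete, the PyTorch sketch below shows one sparse auto-encoder layer with an L1 sparsity penalty on the hidden code; the stacking procedure, the exact sparsity regularizer and the layer sizes are simplifying assumptions and may differ from the paper's model.

```python
import torch.nn as nn

class SparseAutoEncoder(nn.Module):
    """One layer of a stacked sparse auto-encoder; in practice layers are trained
    greedily and stacked, and the hidden codes become the new feature vectors."""
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

def sae_loss(x, recon, code, sparsity_weight=1e-3):
    """Reconstruction error plus an L1 sparsity penalty on the hidden code
    (a common simplification of the KL-divergence sparsity constraint)."""
    return nn.functional.mse_loss(recon, x) + sparsity_weight * code.abs().mean()
```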