ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 April 2020, Volume 57 Issue 4
A Survey on Machine Learning Based Routing Algorithms
Liu Chenyi, Xu Mingwei, Geng Nan, Zhang Xiang
2020, 57(4):  671-687.  doi:10.7544/issn1000-1239.2020.20190866
The rapid development of the Internet has given rise to many new applications, including real-time multimedia services, remote cloud services, etc. These applications require various types of service quality, which poses a significant challenge to current best-effort routing algorithms. Following the recent success of machine learning in games, computer vision, and natural language processing, many researchers have tried to design "smart" routing algorithms based on machine learning methods. In contrast to traditional model-based, decentralized routing algorithms (e.g., OSPF), machine learning based routing algorithms are usually data-driven, which allows them to adapt to dynamically changing network environments and accommodate different service quality requirements. Data-driven routing algorithms based on machine learning have shown great potential to become an important part of next-generation networks. However, research on artificially intelligent routing is still at a very early stage. In this paper we first survey current research on data-driven routing algorithms based on machine learning, presenting the main ideas, application scenarios, and pros and cons of these different works. Our analysis shows that current research mainly addresses the principles of machine learning based routing algorithms but is still far from deployment in real scenarios. We then analyze different training and deployment methods for machine learning based routing algorithms in real scenarios and propose two reasonable approaches to train and deploy such routing algorithms with low overhead and high reliability. Finally, we discuss the opportunities and challenges and present several potential research directions for machine learning based routing algorithms in the future.
A Review on the Application of Machine Learning in SDN Routing Optimization
Wang Guizhi, Lü Guanghong, Jia Wucai, Jia Chuanghui, Zhang Jianshen
2020, 57(4):  688-698.  doi:10.7544/issn1000-1239.2020.20190837
With the rapid development of network technology and the continuous emergence of new applications, the sharp increase in network data makes network management extremely complicated. Devices in traditional networks are diverse, complex to configure, and difficult to manage, but new network architectures such as software defined networking (SDN) bring new hope to network management: they free the network from the limitations of hardware equipment and give it advantages such as flexibility and programmability. A good routing mechanism affects the performance of the whole network, and the centralized control characteristics of SDN open up new research directions for applying machine learning to routing mechanisms. This paper first discusses the current status of SDN routing optimization, and then summarizes recent research on machine learning in SDN routing from the perspectives of supervised learning and reinforcement learning. Finally, in order to meet the QoS (quality of service) requirements of different applications and the QoE (quality of experience) of different users, this paper puts forward data-driven cognitive routing as a development trend. Endowing network nodes with cognitive behaviors such as perception, memory, search, decision-making, reasoning, and explanation can speed up the path-finding process, optimize route selection, and improve network management.
Building Network Domain Knowledge Graph from Heterogeneous YANG Models
Dong Yongqiang, Wang Xin, Liu Yongbo, Yang Wang
2020, 57(4):  699-708.  doi:10.7544/issn1000-1239.2020.20190882
With the continuous expansion of network scale, network management and operation face great challenges of complexity and heterogeneity. Existing intelligent network operation approaches lack a unified data model at the knowledge level to guide the processing of network big data. As a data modeling language, YANG has been used to model the configuration and state data transmitted by the NETCONF protocol. This paper proposes an intelligent network operation scheme that builds a network domain knowledge graph from heterogeneous YANG models. Following the YANG language specification, the scheme defines basic principles for network domain ontology construction, forming an ontology structure containing 51 classes and more than 70 properties. Then, heterogeneous YANG models from different standardization organizations and vendors are extracted and instantiated into the network domain knowledge graph. Entity alignment methods are employed to discover semantic co-reference relationships among multi-source YANG models. The acquired knowledge graph provides a unified semantic framework for organizing massive network operation data, which eliminates the need to construct an AIOps ontology manually. As such, the configuration management and operational maintenance of networks can be greatly simplified, suggesting new solutions for network performance optimization and anomaly detection problems.
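A minimal sketch of the instantiation step described above: walking a parsed YANG subtree and emitting RDF-style triples. The node structure, class names, and predicates here are illustrative assumptions, not the paper's 51-class ontology.

```python
# Hypothetical sketch: instantiating YANG model nodes into RDF-style triples.
# Node structure and predicate names are illustrative, not the paper's ontology.

def yang_to_triples(module_name, node, parent_id=None, triples=None):
    """Recursively turn a parsed YANG subtree into (subject, predicate, object) triples."""
    if triples is None:
        triples = []
    node_id = f"{module_name}:{node['name']}"
    triples.append((node_id, "rdf:type", node.get("kind", "yang:Node")))
    if parent_id is not None:
        triples.append((parent_id, "yang:hasChild", node_id))
    for child in node.get("children", []):
        yang_to_triples(module_name, child, node_id, triples)
    return triples

# Toy fragment of an interface model; entity alignment would later merge
# co-referent nodes produced from different vendors' models.
ietf = {"name": "interface", "kind": "yang:Container",
        "children": [{"name": "mtu", "kind": "yang:Leaf"}]}
print(yang_to_triples("ietf-interfaces", ietf))
```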
DNN Inference Acceleration via Heterogeneous IoT Devices Collaboration
Sun Sheng, Li Xujing, Liu Min, Yang Bo, Guo Xiaobing
2020, 57(4):  709-722.  doi:10.7544/issn1000-1239.2020.20190863
Deep neural networks (DNNs) have been intensively deployed in a variety of intelligent applications (e.g., image and video recognition). Nevertheless, due to the heavy computational burden of DNNs, resource-constrained IoT devices are unsuitable for locally executing DNN inference tasks. Existing cloud-assisted approaches are severely affected by unpredictable communication latency and the unstable performance of remote servers. As a countermeasure, leveraging collaborative IoT devices to achieve distributed and scalable DNN inference is a promising paradigm. However, existing works only consider homogeneous IoT devices with static partitioning. Thus, there is an urgent need for a novel framework that adaptively partitions DNN tasks and orchestrates distributed inference among heterogeneous resource-constrained IoT devices. This framework faces two main challenges. First, it is difficult to accurately profile the multi-layer inference latency of DNNs. Second, it is difficult to learn the collaborative inference strategy adaptively and in real time in heterogeneous environments. To this end, we first propose an interpretable multi-layer prediction model that abstracts complex layer parameters. Furthermore, we leverage evolutionary reinforcement learning (ERL) to adaptively determine a near-optimal partitioning strategy for DNN inference tasks. Real-world experiments based on Raspberry Pi show that our proposed method can significantly accelerate inference in dynamic and heterogeneous environments.
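As a rough illustration of the interpretable multi-layer latency predictor, the sketch below fits a linear model over hypothetical per-layer features; the paper's actual features and model form are not specified in this abstract.

```python
import numpy as np

# Hypothetical per-layer features: [FLOPs (G), params (M), output size (MB)].
# The paper's interpretable predictor may abstract layers differently.
layer_features = np.array([
    [0.21, 0.01, 3.1],   # conv1
    [0.90, 0.30, 1.6],   # conv2
    [0.02, 4.10, 0.02],  # fc
])
measured_ms = np.array([14.0, 41.0, 6.0])  # latencies profiled on one device

# Fit an interpretable linear latency model: latency ~ features @ w.
w, *_ = np.linalg.lstsq(layer_features, measured_ms, rcond=None)
print("per-feature cost:", w)
print("predicted:", layer_features @ w)
```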
Bus-Data-Driven Forwarding Scheme for Urban Vehicular Networks
Tang Xiaolan, Xu Yao, Chen Wenlong
2020, 57(4):  723-735.  doi:10.7544/issn1000-1239.2020.20190876
In urban vehicular ad hoc networks, the complex and dynamic traffic conditions and the diversity of driving routes cause the network topology to change quickly and make the communication links between vehicles unstable, which degrades the data forwarding performance of vehicular networks. As an important public transportation facility in cities, buses have regular driving routes and departure times, and bus lines cover urban streets widely. Compared with private cars, buses are better data carriers and forwarders, and are helpful for achieving more reliable vehicle-to-vehicle communication. This paper proposes a bus-data-driven forwarding scheme for urban vehicular networks, called BUF, which aims to improve the transmission efficiency of urban vehicular networks by analyzing bus line data and selecting appropriate buses as forwarding nodes. First, a bus stop topology graph is constructed, in which all bus stops in the scenario are vertices and an edge links two vertices if some bus line continuously passes through the two stops; the cost of an edge is computed from the expected number of buses and the distance between the two stops. The optimal forwarding path from the source stop to the destination stop is then calculated with the Dijkstra algorithm. Moreover, to ensure that data is forwarded along the optimal path, the neighbor backbone buses, whose subsequent stops overlap the optimal path, are selected as forwarding nodes with priority, and the greater the overlap degree, the higher the bus's priority for forwarding data. When no backbone bus exists, the neighbor buses that will pass the expected next stop, called supplement buses, are selected as relays. In scenarios without backbone or supplement buses, private cars are used to establish a multi-hop link to a suitable bus forwarder in order to accelerate data forwarding. Experimental results with real Beijing road network and bus line data show that, compared with other schemes, the BUF scheme achieves a higher data delivery rate and shorter delay.
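A minimal sketch of the path computation step, assuming a toy stop graph with edge costs given directly; in BUF these costs are derived from the expected number of buses and the inter-stop distance.

```python
import heapq

# Toy bus stop topology graph: vertices are stops, edge weights are forwarding costs.
graph = {
    "A": {"B": 2.0, "C": 5.0},
    "B": {"C": 1.5, "D": 4.0},
    "C": {"D": 1.0},
    "D": {},
}

def dijkstra(graph, src, dst):
    """Return the minimum-cost stop sequence from src to dst."""
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        cost, stop, path = heapq.heappop(pq)
        if stop == dst:
            return cost, path
        if stop in seen:
            continue
        seen.add(stop)
        for nxt, w in graph[stop].items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

print(dijkstra(graph, "A", "D"))  # (4.5, ['A', 'B', 'C', 'D'])
```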
Adversarial Example Attack Analysis of Low-Dimensional Industrial Control Network System Dataset
Zhou Wen, Zhang Shikun, Ding Yong, Chen Xi
2020, 57(4):  736-745.  doi:10.7544/issn1000-1239.2020.20190844
The growth in cyber attacks on industrial control systems (ICS) highlights the need for network intrusion anomaly detection. Researchers have proposed various anomaly detection models for industrial control network traffic based on machine learning algorithms. However, adversarial example attacks are hindering the widespread application of machine learning models. Existing research on adversarial example attacks has focused on feature-rich, high-dimensional datasets. However, because the network topology of an industrial control system is relatively fixed, the number of features in an ICS dataset is small, and it is unknown whether existing findings on adversarial examples carry over to low-dimensional ICS datasets. We analyze the relationship between four common optimization algorithms (namely, SGD, RMSProp, AdaDelta, and Adam) and the attacking capability of adversarial examples, and analyze the ability of typical machine learning algorithms to defend against adversarial example attacks, through experiments on a low-dimensional natural gas dataset. We also investigate whether adversarial example based training can improve the anti-attack ability of deep learning algorithms. Moreover, a new index, "Year-to-Year Loss Rate", is proposed to evaluate the white-box attacking ability of adversarial examples. Experimental results show that for the natural gas dataset: 1) the optimization algorithm does have an impact on the white-box attacking ability of adversarial examples; 2) adversarial examples are able to carry out black-box attacks on each typical machine learning algorithm; 3) compared with decision tree, random forest, support vector machine, AdaBoost, logistic regression, and convolutional neural network, the recurrent neural network has the best capability of resisting black-box attacks by adversarial examples; 4) adversarial example training can improve the defending ability of deep learning models.
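The abstract does not name its attack construction; as a hedged illustration, the sketch below applies a standard FGSM-style gradient perturbation to a toy low-dimensional classifier of the kind an ICS dataset would induce.

```python
import torch
import torch.nn as nn

# Illustrative FGSM-style attack on a toy classifier; the paper's attack
# construction and the real natural gas dataset features are not shown here.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(4, 8, requires_grad=True)   # 8 features: a low-dimensional setting
y = torch.tensor([0, 1, 0, 1])

loss = loss_fn(model(x), y)
loss.backward()

epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()  # perturb along the gradient sign
print("prediction flips:",
      (model(x).argmax(1) != model(x_adv).argmax(1)).sum().item())
```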
Burst-Analysis Website Fingerprinting Attack Based on Deep Neural Network
Ma Chencheng, Du Xuehui, Cao Lifeng, Wu Bei
2020, 57(4):  746-766.  doi:10.7544/issn1000-1239.2020.20190860
Anonymous networks, represented by Tor, are communication intermediary networks that hide users' data transmission behavior. Criminals use anonymous networks to engage in cybercrime, which causes great difficulties for network supervision. Website fingerprinting attack technology is a feasible technique for cracking anonymous communication. It can be used to discover intranet users who secretly access sensitive websites through an anonymous network, which makes it an important means of network supervision. The application of neural networks to website fingerprinting attacks breaks through the performance bottleneck of traditional methods, but existing research has not fully considered designing the neural network structure around the characteristics of Tor traffic, such as bursts, and the characteristics of website fingerprinting attack technology. The resulting networks are overly complicated and contain redundant analysis modules, which leads to problems such as incomplete feature extraction and analysis and slow running. Based on research and analysis of Tor traffic characteristics, a lightweight burst feature extraction and analysis module based on a one-dimensional convolutional network is designed, and a burst-analysis website fingerprinting attack method based on a deep neural network is proposed. Furthermore, aiming at the shortcoming of simply using a threshold method to analyze fingerprint vectors in open-world scenarios, a fingerprint vector analysis model based on the random forest algorithm is designed. The classification accuracy of the improved model reaches 99.87%, and the model performs excellently in alleviating concept drift, bypassing defenses against website fingerprinting attacks, identifying Tor hidden websites, training with a small amount of data, and running time, which improves the practicality of applying website fingerprinting attack technology to real networks.
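A minimal sketch of a lightweight one-dimensional convolutional burst-feature extractor of the kind described; layer sizes, the burst-sequence length, and the number of monitored sites are illustrative guesses.

```python
import torch
import torch.nn as nn

# Sketch of a lightweight 1-D convolutional burst-feature extractor; the
# paper's actual architecture and hyperparameters are not given in the abstract.
class BurstNet(nn.Module):
    def __init__(self, num_sites=100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=8, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=8, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.classifier = nn.Linear(64 * 8, num_sites)

    def forward(self, bursts):           # bursts: (batch, 1, seq_len)
        z = self.features(bursts).flatten(1)
        return self.classifier(z)        # per-site fingerprint logits

x = torch.randn(2, 1, 500)               # signed burst sizes from a Tor trace
print(BurstNet()(x).shape)               # torch.Size([2, 100])
```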
Selection of Network Defense Strategies Based on Stochastic Game and Tabu Search
Sun Qian, Xue Leiqi, Gao Ling, Wang Hai, Wang Yuxiang
2020, 57(4):  767-777.  doi:10.7544/issn1000-1239.2020.20190870
The network defense strategy is the key factor determining the effect of network security protection. With respect to the rationality precondition of existing network defense decision-making research and the parameter selection of the attack-defense benefit function, factors in real network attack and defense such as information asymmetry and legal punishment introduce model deviations, which reduce the practicability and reliability of the resulting strategies. In this paper, a tabu stochastic game model is constructed on the precondition of bounded rationality, the tabu search algorithm is introduced to analyze the bounded rationality of the stochastic game, and a search algorithm with a memory function is designed. The data structure of the tabu list is used to realize the memory function, and the data-driven memory, combined with the game model, is used to obtain the optimal defense strategy. Experimental results show that this method improves the accuracy of quantifying attack and defense benefits, improves the accuracy of defense benefits compared with existing typical methods, and has better space complexity than reinforcement learning and other typical algorithms.
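A generic tabu search skeleton illustrating the memory function; the neighbors() and payoff() functions are hypothetical stand-ins for the paper's stochastic game model.

```python
# Generic tabu search skeleton; neighbors() and payoff() are hypothetical
# stand-ins for the strategy space and benefit function of the game model.
def tabu_search(initial, neighbors, payoff, iters=100, tabu_len=10):
    best = current = initial
    tabu = [initial]                           # memory: recently visited states
    for _ in range(iters):
        candidates = [s for s in neighbors(current) if s not in tabu]
        if not candidates:
            break
        current = max(candidates, key=payoff)  # best admissible neighbor
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)                        # forget the oldest entry
        if payoff(current) > payoff(best):
            best = current
    return best

# Toy example: choose a defense strategy index maximizing a concave benefit curve.
payoff = lambda s: -(s - 7) ** 2
neighbors = lambda s: [max(0, s - 1), min(20, s + 1)]
print(tabu_search(0, neighbors, payoff))       # 7
```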
Unified Anomaly Detection for Syntactically Diverse Logs in Cloud Datacenter
Zhang Shenglin, Li Dongwen, Sun Yongqian, Meng Weibin, Zhang Yuzhe, Zhang Yuzhi, Liu Ying, Pei Dan
2020, 57(4):  778-790.  doi:10.7544/issn1000-1239.2020.20190875
Benefiting from the rapid development of natural language processing and machine learning methods, log-based automatic anomaly detection is becoming increasingly popular for the software and hardware systems in cloud datacenters. Current unsupervised learning methods, while requiring no labelled anomalies, still need a large number of normal logs and generally suffer from low accuracy. Current supervised learning methods, although accurate, need much labelling effort. This is because the syntax of the logs generated by different software/hardware systems varies greatly, and thus, for each type of log, supervised methods need sufficient anomaly labels to train a corresponding anomaly detection model. Meanwhile, different types of logs usually have the same or similar semantics when anomalies occur. In this paper, we propose LogMerge, which learns the semantic similarity among different types of logs and then transfers anomaly patterns across them, reducing labelling effort significantly. LogMerge employs a word embedding method to construct vectors of words and templates, and then utilizes a clustering technique to group templates based on semantics, addressing the challenge that different types of logs differ in syntax. In addition, LogMerge combines CNN and LSTM to build an anomaly detection model, which not only effectively extracts the sequential features of logs, but also minimizes the impact of noise in logs. We have conducted extensive experiments on publicly available datasets, which demonstrate that LogMerge achieves higher accuracy than current supervised/unsupervised learning methods. Moreover, LogMerge achieves high accuracy when there are few anomaly labels in the target type of logs, thereby significantly reducing labelling effort.
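A sketch of the CNN+LSTM combination over sequences of log-template embeddings; all dimensions are illustrative, not LogMerge's published configuration.

```python
import torch
import torch.nn as nn

# Sketch of a CNN+LSTM anomaly detector over log-template embedding sequences;
# dimensions are illustrative, not LogMerge's published configuration.
class CnnLstmDetector(nn.Module):
    def __init__(self, emb_dim=64, hidden=128):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, emb_dim, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)     # normal vs. anomalous window

    def forward(self, seq):                  # seq: (batch, window, emb_dim)
        z = self.conv(seq.transpose(1, 2)).relu().transpose(1, 2)  # local patterns
        _, (h, _) = self.lstm(z)             # sequential dependencies
        return self.head(h[-1])

window = torch.randn(8, 20, 64)              # 8 windows of 20 template embeddings
print(CnnLstmDetector()(window).shape)       # torch.Size([8, 2])
```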
The Optimization Method of Wireless Network Attacks Detection Based on Semi-Supervised Learning
Wang Ting, Wang Na, Cui Yunpeng, Li Huan
2020, 57(4):  791-802.  doi:10.7544/issn1000-1239.2020.20190880
Aiming to use deep learning technology to optimize attack detection in high-dimensional, complex wireless network traffic data, this paper proposes WiFi-ADOM (WiFi network attacks detection optimization method), based on semi-supervised learning. First, based on a stacked sparse auto-encoder (SSAE), an unsupervised learning model, two types of network traffic feature representation vectors are proposed: the new feature value vector and the original feature weight value vector. Then, the original feature weight value vector is used to initialize the weights of a supervised deep neural network to obtain a preliminary attack-type result, and the unsupervised clustering method Bi-kmeans is used to produce a corrective term for unknown-attack discrimination from the new feature value vectors. Finally, the preliminary attack-type result and the corrective term for unknown-attack discrimination are combined to obtain the final attack-type result. Comparison with existing attack detection methods on the public wireless network traffic dataset AWID verifies the superior detection performance of WiFi-ADOM. At the same time, the importance of features in network attack detection is explored. The results show that WiFi-ADOM can effectively detect unknown attacks while ensuring detection performance.
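A sketch of the weight-transfer idea, assuming a single-layer auto-encoder for brevity: pre-trained encoder weights initialize the first layer of the supervised classifier. Sizes and the five-class output are illustrative.

```python
import torch
import torch.nn as nn

# Sketch of the weight-transfer idea: pre-train a (sparse) auto-encoder, then
# copy its encoder weights into the supervised classifier. Sizes are illustrative.
encoder = nn.Linear(100, 32)                 # AWID-like features -> latent code
decoder = nn.Linear(32, 100)
# ... unsupervised pre-training of encoder/decoder would run here ...

classifier = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, 5))
with torch.no_grad():
    classifier[0].weight.copy_(encoder.weight)   # initialize from the SSAE
    classifier[0].bias.copy_(encoder.bias)

print(classifier(torch.randn(4, 100)).shape)      # torch.Size([4, 5])
```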
Cybersecurity Challenges from the Perspective of Emergence
Qu Leilei, Xiao Ruojin, Shi Wenchang, Liang Bin, Qin Bo
2020, 57(4):  803-823.  doi:10.7544/issn1000-1239.2020.20190379
Security in the cyberspace undoubtedly belongs to emergent properties in nature. This kind of emergent property brings severe challenges to cybersecurity. A number of research efforts on emergent phenomena related to security in the cyberspace have been seen both at home and abroad, and many significant results have been achieved. However, people's knowledge of emergence in cybersecurity is still far from sufficient. Given this situation, the challenges to cybersecurity are examined systematically from the perspective of emergence to promote the development of innovative ideas and theories in cybersecurity. First, fundamental concepts of emergence in cybersecurity are established based on the original meaning of emergence in systems science. Second, the challenges that emergent security poses to the cyberspace are explored with respect to attacks, vulnerabilities, and defenses. Then, the state of the art of research on the emergence of security in the cyberspace is analyzed under three categories: descriptive research, directive research, and operational research. Finally, focusing on fundamental theories, basic models, and practical tools, discussions are made on how to further the study of emergent security in the cyberspace in the future.
Survey of Access-Driven Cache-Based Side Channel Attack
Miao Xinliang, Jiang Liehui, Chang Rui
2020, 57(4):  824-835.  doi:10.7544/issn1000-1239.2020.20190581
In recent years, massive heterogeneous IoT (Internet of things) terminal devices have come to carry core functions, making them more likely to be the direct targets of attackers. Moreover, more and more terminal devices and cloud platforms are suffering from cache-based side channel attacks. These attacks construct fine-grained, concealed cache side channels to extract sensitive data (such as encryption keys) from target devices, defeating the isolation mechanism. In this paper, we focus on access-driven cache-based side channel attack technology. First, the fundamental principle and the current research status of cache-based side channel attacks are presented. Then the three access-driven cache-based side channel attacks, "Evict+Reload", "Prime+Probe", and "Flush+Reload", are described: their attack principles, implementation processes, and attack effects are elaborated through theoretical analysis and experimental verification. After that, the characteristics and applications of the three attacks are discussed and compared. Furthermore, the current challenges in LLC (last-level cache) attacks and noise elimination are identified. Finally, future research directions in the era of IoE (Internet of everything) are pointed out, in terms of the gradual change of the cache hierarchy, the massive data storage of cloud platforms, and the widespread deployment of TEEs (trusted execution environments) on physical devices.
Cyber Security Threat Intelligence Sharing Model Based on Blockchain
Huang Kezhen, Lian Yifeng, Feng Dengguo, Zhang Haixia, Liu Yuling, Ma Xiangliang
2020, 57(4):  836-846.  doi:10.7544/issn1000-1239.2020.20190404
In the ever-intensifying confrontation between cyber security attack and defense, there is a natural asymmetry between the offensive and defensive sides. CTI (cyber security threat intelligence) sharing is an effective method to improve the responsiveness and effectiveness of the defending party. However, there is a contradiction between the privacy protection requirements of CTI sharing and the need to build a complete attack chain. Aiming at this contradiction, this paper proposes a blockchain-based CTI sharing model, which uses the account anonymity of blockchain technology to protect the privacy of CTI sharing parties, and at the same time utilizes the tamper-resistance and accounting of blockchain technology to prevent "free-riding" behavior in CTI sharing and guarantee the benefits of the sharing parties. A one-way encryption function is used to protect private information in the CTI; the model then uses the encrypted CTI to build a complete attack chain, and uses the traceability of blockchain technology to decrypt the attack source in the attack chain. The smart contract mechanism of blockchain technology is used to implement automated early warning and response against cyber security threats. Finally, the feasibility and effectiveness of the proposed model are verified by simulation experiments.
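A sketch of the one-way protection idea using a cryptographic hash: two sharing parties can correlate reports about the same attack source without revealing it. The field names and the absence of salting are illustrative simplifications.

```python
import hashlib

# Sketch of one-way protection: hash private fields so two sharing parties can
# still correlate the same entity without revealing it. Field names are
# illustrative assumptions, not the paper's CTI schema.
def protect(cti_record, private_fields=("victim_ip",)):
    out = dict(cti_record)
    for field in private_fields:
        digest = hashlib.sha256(out[field].encode()).hexdigest()
        out[field] = digest          # one-way: original value is not recoverable
    return out

a = protect({"victim_ip": "10.0.0.5", "attacker_ip": "203.0.113.9"})
b = protect({"victim_ip": "10.0.0.5", "attacker_ip": "198.51.100.2"})
# Equal digests let the chain link two reports to one victim without exposing it.
print(a["victim_ip"] == b["victim_ip"])   # True
```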
Towards Spatial Range Queries Under Local Differential Privacy
Zhang Xiaojian, Fu Nan, Meng Xiaofeng
2020, 57(4):  847-858.  doi:10.7544/issn1000-1239.2020.20190360
User data collection and analysis under local differential privacy has attracted considerable attention in recent years. The trade-off among the domain size of user data, the encoding method, and the perturbation method directly constrains the accuracy of spatial range queries. To remedy the deficiencies of current encoding and perturbation methods, this paper employs a grid and a quadtree to propose an efficient solution, called GT-R, for answering spatial range queries. GT-R uses a uniform grid to decompose the data domain and generate unit-sized regions, over which an index quadtree is built. Each user then encodes his/her data with the quadtree shared by the server, runs optimal randomized response on each node of a sampled level of the quadtree, and reports the sampled level along with the perturbed values. The server accumulates the reports from users to reconstruct a quadtree comprising the sums of all users' reports. Besides, to boost the accuracy of range queries, the server relies on post-processing for consistency of the frequency of each node. GT-R is compared with existing methods on large-scale real datasets. The experimental results show that GT-R outperforms its competitors and achieves more accurate spatial range query results.
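A sketch of binary randomized response, the perturbation primitive GT-R runs on each quadtree node, together with the standard unbiased frequency estimator; GT-R's optimal randomized response over quadtree levels is more involved.

```python
import math
import random

# Binary randomized response and its standard unbiased frequency estimator;
# GT-R's per-level optimal randomized response is more involved than this.
def perturb(bit, epsilon):
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)  # keep-truth probability
    return bit if random.random() < p else 1 - bit

def estimate(reports, epsilon):
    """Unbiased frequency estimate from perturbed bits."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    return (observed + p - 1) / (2 * p - 1)

true_bits = [1] * 300 + [0] * 700          # 30% of users fall in this region
reports = [perturb(b, epsilon=1.0) for b in true_bits]
print(round(estimate(reports, epsilon=1.0), 3))   # close to 0.3
```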
A Survey on Algorithm Research of Scene Parsing Based on Deep Learning
Zhang Rui, Li Jintao
2020, 57(4):  859-875.  doi:10.7544/issn1000-1239.2020.20190513
Scene parsing aims to predict the category of each pixel in a scene image. It is a fundamental and important task in computer vision, of great significance for analyzing and understanding scene images, and it has a wide range of applications in fields such as autonomous driving, video surveillance, and augmented reality. Recently, scene parsing algorithms based on deep learning have achieved a breakthrough, improving greatly over traditional scene parsing algorithms. In this survey, we first analyze and describe the three difficulties in scene parsing: producing fine-grained parsing results, handling multiple scale deformations, and exploiting strong spatial relationships. We then focus on the "convolutional-deconvolutional" framework that underlies most deep learning based scene parsing algorithms. Furthermore, we introduce scene parsing algorithms based on deep learning proposed in recent years. To tackle the three difficulties, these algorithms employ high-resolution feature maps, multi-scale information, and contextual information to further improve the performance of scene parsing. After that, we briefly introduce the common public scene parsing datasets. Finally, we draw conclusions about deep learning based scene parsing algorithms and point out some potential research opportunities.
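A minimal sketch of the "convolutional-deconvolutional" framework mentioned above; real scene parsing models are far deeper and typically use pretrained backbones.

```python
import torch
import torch.nn as nn

# Minimal "convolutional-deconvolutional" scene parsing skeleton; real models
# (FCN, DeepLab, etc.) are much deeper and use pretrained backbones.
class ConvDeconv(nn.Module):
    def __init__(self, num_classes=21):
        super().__init__()
        self.encoder = nn.Sequential(          # downsample, grow receptive field
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(          # upsample back to pixel resolution
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, img):                    # img: (batch, 3, H, W)
        return self.decoder(self.encoder(img)) # per-pixel class logits

x = torch.randn(1, 3, 64, 64)
print(ConvDeconv()(x).shape)                   # torch.Size([1, 21, 64, 64])
```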
A Semantic Segmentation Method of Traffic Scene Based on Categories-Aware Domain Adaptation
Jia Yingxia, Lang Congyan, Feng Songhe
2020, 57(4):  876-887.  doi:10.7544/issn1000-1239.2020.20190475
As a basic and crucial research issue in the field of machine vision, image semantic segmentation aims to classify every pixel in a color image and predict its corresponding semantic label. Most existing semantic segmentation methods are supervised learning models that depend excessively on given per-pixel annotations. Although existing segmentation methods based on weak supervision and semi-supervised learning can incorporate unlabeled samples, semantic category mis-classification often occurs due to the lack of effective use of spatial semantic information, and such methods are difficult to apply directly to other cross-domain unlabeled datasets. To solve these problems, this paper proposes a semantic segmentation method based on categories-aware domain adaptation for cross-domain unlabeled datasets. First, the proposed method adopts an optimized upsampling method and proposes a new loss function based on focal loss, which effectively addresses the difficulty existing methods have in segmenting categories with little data. Second, a categories-aware domain adaptation method is proposed, which improves the mIoU of semantic segmentation on unlabeled target-domain images by 6% compared with the state-of-the-art methods. The proposed method is verified on five datasets, and the experimental results fully demonstrate its effectiveness and generalization.
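A sketch of the standard focal loss the proposed loss function builds on; the paper's variant and its category weighting are not detailed in this abstract.

```python
import torch
import torch.nn.functional as F

# Standard focal loss (the basis the paper builds on); gamma down-weights easy,
# well-classified pixels so rare categories contribute more to the gradient.
def focal_loss(logits, target, gamma=2.0):
    """logits: (N, C), target: (N,) class indices."""
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, target.unsqueeze(1)).squeeze(1)  # log p of true class
    pt = log_pt.exp()
    return (-(1 - pt) ** gamma * log_pt).mean()

logits = torch.randn(10, 19)                  # e.g. 19 Cityscapes-style classes
target = torch.randint(0, 19, (10,))
print(focal_loss(logits, target))
```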
Min-Entropy Transfer Adversarial Hashing
Zhuo Junbao, Su Chi, Wang Shuhui, Huang Qingming
2020, 57(4):  888-896.  doi:10.7544/issn1000-1239.2020.20190476
Owing to its storage and retrieval efficiency, hashing is widely applied to large-scale image retrieval. Most existing deep hashing methods assume that the database in the target domain is identically distributed with the training set in the source domain. In practical applications, however, this assumption is too strict, and considerable domain discrepancy exists between the source and target domains. To address this cross-domain image retrieval problem, some research works introduce domain adaptation techniques into image retrieval methods, with the goal of enhancing the generalization ability of the learned hashing function. However, the hash codes learned by existing cross-domain hashing methods lack discrimination and domain-invariance. We propose a semantic preservation module and a min-entropy loss to tackle these issues. We construct a classification sub-network as the semantic preservation module to fully utilize the labels in the source domain: the semantic information encoded in labels is passed to the hash learning network, encouraging the learned hash codes to carry more semantic information and discriminability. As for unlabeled target-domain samples, the entropy of their classification responses characterizes the confidence of the classifier. Ideal target classification responses should tend toward one-hot vectors, which minimize the entropy. Therefore, we add a min-entropy loss to our model. Minimizing the entropy of the classification responses of target samples aligns the source and target distributions in the classifier response space, so the learned hash codes tend to be more domain-invariant. With the semantic preservation module and the min-entropy loss, we construct an end-to-end deep neural network for cross-domain image retrieval. Extensive experiments show the superiority of our model over existing state-of-the-art methods.
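A sketch of the min-entropy term on unlabeled target samples; the full model combines this with hashing and semantic-preservation losses not shown here.

```python
import torch
import torch.nn.functional as F

# Min-entropy term on unlabeled target samples: low entropy pushes classifier
# responses toward one-hot vectors, aligning domains in the response space.
def min_entropy_loss(target_logits):
    p = F.softmax(target_logits, dim=1)
    return -(p * p.clamp_min(1e-8).log()).sum(dim=1).mean()

confident = torch.tensor([[8.0, 0.0, 0.0]])   # near one-hot -> low entropy
uncertain = torch.tensor([[1.0, 1.0, 1.0]])   # uniform -> maximal entropy
print(min_entropy_loss(confident).item())     # small (~0.006)
print(min_entropy_loss(uncertain).item())     # ln(3) ~ 1.099
```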