ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 July 2020, Volume 57 Issue 7
A Component-Level Dynamic Power-Aware Energy-Saving Mechanism for Backbone Networks
Zhang Jinhong, Wang Xingwei, Yi Bo, Huang Min
2020, 57(7):  1347-1368.  doi:10.7544/issn1000-1239.2020.20190776
With Internet traffic increasing year by year, the power consumption of the Internet is rising at an alarming rate, and the resulting environmental problems, e.g. the greenhouse effect caused by the surging carbon footprint, have aroused continuous concern on a global scale. These problems are especially serious in backbone networks, where aggregated traffic is transmitted, and the oversupply principle of traditional Internet resource provisioning further aggravates the situation. To address it, this paper devises a component-level dynamic power-aware energy-saving mechanism for backbone networks. In the proposed mechanism, the incoming traffic of each node is first predicted dynamically for a short term; a fine-grained port-number conversion algorithm then determines the number of ports to be regulated; the corresponding ports convert their power states according to sleeping and awakening rules; finally, a novel hierarchical scheduling algorithm is devised to schedule the packets. In the simulation, based on real traffic distribution traces over three typical backbone networks, we determine the prediction parameters, test how closely power consumption tracks traffic load, explore the impact of different prediction time slot series and different numbers of traffic load counters on the accuracy of load prediction, analyze the impact of overestimation and underestimation errors in traffic load prediction on power consumption, and discuss the tradeoff between power efficiency and actual performance in different application scenarios. Results demonstrate that the proposed component-level power control mechanism can control the power consumption of each network component dynamically and proportionally with a fine granularity and delivers significant energy savings.
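A minimal sketch of the port-regulation step described above, under assumed names and thresholds (the paper's actual conversion algorithm and sleeping/awakening rules are more elaborate): a short-term load prediction is converted into the number of ports to keep active on a bundled link, and the remaining ports are put to sleep.

```python
# Illustrative sketch only: converting a short-term traffic prediction into the
# number of active ports, then deciding which ports to sleep or awaken.
# Function names, the utilization cap and port capacities are assumptions.

import math

def required_ports(predicted_load_gbps: float, port_capacity_gbps: float,
                   utilization_cap: float = 0.8) -> int:
    """Number of ports needed so that no active port exceeds the utilization cap."""
    return max(1, math.ceil(predicted_load_gbps / (port_capacity_gbps * utilization_cap)))

def regulate_ports(active_ports: set, all_ports: list, needed: int):
    """Return (ports_to_wake, ports_to_sleep) to reach the target port count."""
    to_wake, to_sleep = [], []
    if len(active_ports) < needed:
        to_wake = [p for p in all_ports if p not in active_ports][:needed - len(active_ports)]
    elif len(active_ports) > needed:
        to_sleep = list(active_ports)[:len(active_ports) - needed]
    return to_wake, to_sleep

# Example: a bundle of four 10 Gb/s ports predicted to carry 17 Gb/s next slot.
ports = ["p0", "p1", "p2", "p3"]
active = {"p0", "p1", "p2", "p3"}
n = required_ports(17.0, 10.0)             # -> 3 active ports suffice
wake, sleep = regulate_ports(active, ports, n)
print(n, wake, sleep)
```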
Research on User Behavior Understanding and Personalized Service Recommendation Algorithm in Twitter Social Networks
Yu Yaxin, Liu Meng, Zhang Hongyu
2020, 57(7):  1369-1380.  doi:10.7544/issn1000-1239.2020.20190158
With the rapid development of social networks in recent years, a large amount of short text data with spatio-temporal information is produced. Due to the short length of the texts and the sparseness of geographic locations, it is very difficult to capture the semantic topics of user behavior. In addition, most existing work on user behavior understanding does not take the dependency among behavior elements into account, which results in an incomplete understanding of user behavior. Motivated by this, two models mixing time, activity and region, i.e., the user-time-activity model (UTAM) and the user-time-region model (UTRM), are first proposed in this paper to explore behavior patterns effectively. Then, by extracting activity-service topics based on latent Dirichlet allocation (LDA), an activity-to-service topic model (ASTM) is proposed to mine the correspondence between activities and services. Finally, a novel matrix factorization algorithm fused with distance and coupled similarity, i.e., matrix factorization based on couple & distance (MFCD), is put forward to improve recommendation quality. To verify the effectiveness of the proposed models and algorithms, extensive experiments are conducted on a real Twitter dataset. Experimental results show that the proposed models greatly improve the quality of personalized service recommendation, and that the MFCD algorithm outperforms traditional matrix factorization in understanding user behaviors.
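For orientation, a sketch of the general pattern MFCD builds on: plain matrix factorization with an additional regularizer that pulls the latent vectors of similar users together. The way MFCD actually fuses distance and coupled similarity is specific to the paper; the similarity matrix `S` here is an assumed input.

```python
# Illustrative sketch, not the paper's MFCD: SGD matrix factorization with a
# user-similarity regularizer. R is a toy rating matrix, S a toy user-similarity matrix.

import numpy as np

def mf_with_similarity(R, S, k=8, lr=0.01, lam=0.1, beta=0.05, epochs=200, seed=0):
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    obs = np.argwhere(R > 0)                     # observed (user, item) pairs
    for _ in range(epochs):
        for u, i in obs:
            pu, qi = U[u].copy(), V[i].copy()
            err = R[u, i] - pu @ qi
            U[u] += lr * (err * qi - lam * pu)
            V[i] += lr * (err * pu - lam * qi)
        # similarity regularizer: pull each user towards similarity-weighted neighbors
        U -= lr * beta * (U - (S @ U) / np.maximum(S.sum(axis=1, keepdims=True), 1e-9))
    return U, V

R = np.array([[5, 0, 3], [4, 0, 0], [0, 2, 5]], dtype=float)   # toy ratings
S = np.array([[1, .8, .1], [.8, 1, .2], [.1, .2, 1]])          # toy user similarity
U, V = mf_with_similarity(R, S)
print(np.round(U @ V.T, 2))                                     # reconstructed preferences
```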
Research on a Device-free Passive Indoor Regional Localization Method
Li Ruonan, Li Jinbao
2020, 57(7):  1381-1392.  doi:10.7544/issn1000-1239.2020.20190585
Indoor regional localization is widely applied in fields such as medical care and smart buildings. The most prominent problem in indoor regional localization is the interference that the dynamic and unpredictable nature of the radio channel (such as multipath propagation and channel fading) imposes on the received signal strength (RSS). To reduce this interference, this paper proposes a new attention-based CNN-BiLSTM indoor regional localization model, which reduces the dependence of the RSS sequence on channel variation by building a relationship between coarse-grained features and location regions. First, a convolutional neural network (CNN) is used to extract fine-grained features of the regional center point from the RSS sequence. Then, the memory capability of a bidirectional long short-term memory (BiLSTM) network is applied to learn the coarse-grained features of the implied region scope in the current and past RSS sequences. Finally, an attention mechanism fuses the coarse-grained features to build the mapping between RSS sequence features and regional locations, from which the regional location is obtained. Experimental results in a real indoor environment show that, compared with the grid-region comprehensive probability localization model with the best positioning performance, the proposed method improves the accuracy and adaptability of regional localization while reducing computational complexity.
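A structural sketch of the CNN-BiLSTM-attention pipeline described above; layer sizes, channel counts and the number of access points are assumptions for illustration, not the paper's configuration.

```python
# Sketch under assumed dimensions: a 1D CNN extracts fine-grained features from an
# RSS sequence, a BiLSTM captures coarse-grained sequential context, and an
# attention layer pools the outputs before region classification.

import torch
import torch.nn as nn

class CnnBiLstmAttn(nn.Module):
    def __init__(self, n_aps: int, n_regions: int, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_aps, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU())
        self.bilstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.fc = nn.Linear(2 * hidden, n_regions)

    def forward(self, x):                 # x: (batch, seq_len, n_aps) RSS readings
        h = self.cnn(x.transpose(1, 2)).transpose(1, 2)   # (batch, seq_len, 32)
        h, _ = self.bilstm(h)                             # (batch, seq_len, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)            # attention weights over time
        ctx = (w * h).sum(dim=1)                          # weighted context vector
        return self.fc(ctx)                               # region logits

logits = CnnBiLstmAttn(n_aps=6, n_regions=4)(torch.randn(2, 20, 6))
print(logits.shape)   # torch.Size([2, 4])
```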
Distributed Time Division Multiple Access Protocol Based on Energy Harvesting
Xu Juan, Zhang Rong, Kan Jiali, Zhang Yan
2020, 57(7):  1393-1403.  doi:10.7544/issn1000-1239.2020.20190269
Terahertz wireless nanosensor networks (WNSNs) are novel networks that interconnect multiple nano-devices by means of wireless communication. Nanosensors can obtain ultra-high-speed transmission rates using communication in the terahertz band, and medium access control (MAC) protocols play an important role in regulating access to the terahertz channel and coordinating the transmission order among nanosensors. However, classical MAC protocols are not applicable due to the molecular absorption noise in the terahertz channel and the very limited energy of nano-devices. In this paper, a distributed energy-harvesting-based time division multiple access (DEH-TDMA) protocol is proposed, which aims to overcome the energy limitations of nanosensors and the catastrophic collisions in terahertz WNSNs based on a modulation scheme called time spread on-off keying (TS-OOK). The protocol adopts a piezoelectric energy harvesting system: a Markov decision process (MDP) model is first constructed that takes the remaining energy and the number of packets in the buffer as state information; the number of transmitted packets and the energy consumption are then used as factors in the reward function of the MDP model, so that each nanosensor can dynamically access the channel according to its own state after solving for an optimal strategy. Simulation results show that DEH-TDMA has advantages in extending the network life cycle.
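A toy sketch of the MDP view described in the abstract (all parameters are illustrative and the paper's transition and reward models are richer): the state is (remaining energy, queued packets), the action is how many packets to transmit in a slot, and the reward trades transmitted packets against energy consumption. Value iteration then yields a channel-access policy.

```python
# Illustrative MDP with assumed parameters; not the paper's exact model.
E_MAX, Q_MAX = 5, 5          # energy units / buffer size
E_TX = 1                     # energy per transmitted packet
P_ARRIVAL, P_HARVEST = 0.6, 0.5
ALPHA, GAMMA = 0.2, 0.95     # energy penalty weight, discount factor

states = [(e, q) for e in range(E_MAX + 1) for q in range(Q_MAX + 1)]
V = {s: 0.0 for s in states}

def step_value(e, q, a, V):
    """Expected reward plus discounted value after transmitting `a` packets."""
    reward = a - ALPHA * a * E_TX
    e2, q2 = e - a * E_TX, q - a
    exp = 0.0
    for de, pe in ((1, P_HARVEST), (0, 1 - P_HARVEST)):       # energy harvesting
        for dq, pq in ((1, P_ARRIVAL), (0, 1 - P_ARRIVAL)):   # packet arrival
            exp += pe * pq * V[(min(e2 + de, E_MAX), min(q2 + dq, Q_MAX))]
    return reward + GAMMA * exp

for _ in range(200):                                           # value iteration
    V = {(e, q): max(step_value(e, q, a, V)
                     for a in range(0, min(q, e // E_TX) + 1))
         for e, q in states}

policy = {(e, q): max(range(0, min(q, e // E_TX) + 1),
                      key=lambda a: step_value(e, q, a, V)) for e, q in states}
print(policy[(3, 4)])   # packets to send with 3 energy units and 4 packets queued
```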
Efficient Public Encryption Scheme with Keyword Search for Cloud Storage
Guo Lifeng, Li Zhihao, Hu Lei
2020, 57(7):  1404-1414.  doi:10.7544/issn1000-1239.2020.20190671
Public key encryption with keyword search (PEKS) is a promising cryptographic technique for cloud storage that not only ensures the privacy of stored data but also supports search over it. To resist internal off-line keyword guessing attacks, current solutions introduce the sender's secret key and public key into the keyword ciphertext to realize authentication. In these schemes, however, the receiver must delegate the sender in advance, which does not meet practical requirements when the receiver does not want to delegate the sender. To satisfy such applications, we propose an efficient PEKS scheme and prove its security in the standard model. Our PEKS scheme achieves three advantages. Firstly, by introducing the identities of the sender and the server, the scheme resists both internal and external off-line keyword guessing attacks, and it does not require the receiver to delegate the sender. Secondly, by introducing the server's private key and public key, the trapdoor can be transmitted over a public channel. Thirdly, because anyone can verify the correctness of a keyword search ciphertext, the scheme resists chosen keyword ciphertext attacks.
A New Automatic Search Method for Cryptographic S-Box
Zhang Runlian, Sun Yaping, Wei Yongzhuang, Li Yingxin
2020, 57(7):  1415-1423.  doi:10.7544/issn1000-1239.2020.20190537
Cryptographic S-boxes are core components in many symmetric encryption algorithms and usually determine the security strength of these algorithms. The security evaluation indicators for cryptographic S-boxes include balancedness, algebraic degree, nonlinearity, differential uniformity, etc. Designing S-boxes that are robust against both traditional attacks and side channel attacks such as power attacks is a rather difficult task. Currently, automatic search tools such as cellular automata (CA) and neural networks have become research hotspots for the design of cryptographic S-boxes, in addition to classical algebraic constructions. Based on CA rules, a new search method for S-boxes is proposed in this paper, which uses a strategy of partially fixing components and searching the variable components separately. More specifically, the features of the CA rules used by this method are first described. Then, the strategy of partially fixing and separately searching the variable components is constructed according to the properties of cryptographic S-boxes. Finally, some new S-boxes are obtained and their properties are evaluated. It is shown that a large number of 4×4 optimal S-boxes are attained. In particular, three classes of 4×4 sub-optimal S-boxes can also be transformed into 4×4 optimal S-boxes under the CA rules of this method. Compared with previous well-known results, these new 4×4 optimal S-boxes have lower transparency order and therefore stronger resistance against side channel attacks.
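Not the paper's CA-based search itself, but a self-contained checker for the indicators named above (balancedness, differential uniformity, nonlinearity), which any candidate 4×4 S-box produced by such a search must pass; the PRESENT S-box is used only as a sanity check.

```python
def is_bijective(sbox):                         # 4x4 S-box as a list of 16 values
    return sorted(sbox) == list(range(16))

def differential_uniformity(sbox):
    worst = 0
    for dx in range(1, 16):
        counts = [0] * 16
        for x in range(16):
            counts[sbox[x] ^ sbox[x ^ dx]] += 1
        worst = max(worst, max(counts))
    return worst                                 # 4 for "optimal" 4-bit S-boxes

def nonlinearity(sbox):
    best_corr = 0
    for a in range(16):                          # input mask
        for b in range(1, 16):                   # non-zero output mask
            corr = 0
            for x in range(16):
                dot_in = bin(a & x).count("1") & 1
                dot_out = bin(b & sbox[x]).count("1") & 1
                corr += -1 if dot_in ^ dot_out else 1
            best_corr = max(best_corr, abs(corr))
    return 8 - best_corr // 2                    # 4 for "optimal" 4-bit S-boxes

# PRESENT's S-box as a sanity check: bijective, DU = 4, NL = 4
present = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
print(is_bijective(present), differential_uniformity(present), nonlinearity(present))
```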
Review of Entity Relation Extraction Methods
Li Dongmei, Zhang Yang, Li Dongyuan, Lin Danqiong
2020, 57(7):  1424-1448.  doi:10.7544/issn1000-1239.2020.20190358
Information extraction has long attracted considerable attention in the field of natural language processing. It mainly includes three sub-tasks: entity extraction, relation extraction and event extraction, among which relation extraction is the core task and a significant part of information extraction. The main goal of entity relation extraction is to identify and determine the specific relation between entity pairs in natural language text, which provides fundamental support for intelligent retrieval, semantic analysis, etc., improving search efficiency and enabling the automatic construction of knowledge bases. In this paper, we briefly review the development of entity relation extraction and introduce several tools and evaluation systems for relation extraction in both Chinese and English. In addition, four main kinds of entity relation extraction methods are covered: traditional relation extraction methods and methods based on traditional machine learning, deep learning, and open-domain extraction, respectively. More importantly, we summarize the mainstream research methods and corresponding representative results in different historical stages, and conduct a contrastive analysis of the different entity relation extraction methods. Finally, we forecast the content and trends of future research.
Brain Networks Classification Based on an Adaptive Multi-Task Convolutional Neural Networks
Xing Xinying, Ji Junzhong, Yao Yao
2020, 57(7):  1449-1459.  doi:10.7544/issn1000-1239.2020.20190186
Brain network classification is an important topic in brain science. In recent years, brain network classification based on convolutional neural networks has become a research hotspot. However, it is still difficult to accurately classify brain network data with high dimensionality and small sample size. Because different clinical phenotypes are closely related to the brain networks of different populations, they can provide auxiliary information for brain network classification. Therefore, we propose a new brain network classification method based on an adaptive multi-task convolutional neural network. Firstly, clinical phenotype predictions are introduced as auxiliary tasks, and the shared representation mechanism of multi-task convolutional neural networks is used to provide general and useful information for brain network classification. Then, to reduce experimental cost and the error caused by manual operation, a new adaptive method is proposed to replace manual adjustment of the weight of each task in multi-task learning. Experimental results on the autism brain imaging data exchange I (ABIDE I) dataset show that the multi-task convolutional neural network that introduces clinical phenotype predictions achieves better classification results, and that the adaptive multi-task learning method further improves brain network classification performance.
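One common way to make multi-task weights adaptive is uncertainty-based weighting with learnable log-variances; the paper's own adaptive rule may well differ, so the sketch below is illustrative only. It combines a main brain-network classification loss with auxiliary phenotype-prediction losses without hand-tuned weights.

```python
# Sketch of adaptive task weighting (uncertainty weighting); not necessarily the
# paper's adaptive method. Each task gets a learnable log-variance parameter.

import torch
import torch.nn as nn

class AdaptiveMultiTaskLoss(nn.Module):
    def __init__(self, n_tasks: int):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))   # one per task

    def forward(self, losses):            # losses: list of per-task scalar losses
        total = 0.0
        for i, loss in enumerate(losses):
            precision = torch.exp(-self.log_vars[i])          # adaptive weight
            total = total + precision * loss + self.log_vars[i]
        return total

# usage: main classification task plus two auxiliary phenotype prediction tasks
criterion = AdaptiveMultiTaskLoss(n_tasks=3)
losses = [torch.tensor(0.9, requires_grad=True),
          torch.tensor(0.4, requires_grad=True),
          torch.tensor(0.7, requires_grad=True)]
criterion(losses).backward()              # log_vars receive gradients alongside the model
```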
Iterative Entity Alignment via Re-Ranking
Zeng Weixin, Zhao Xiang, Tang Jiuyang, Tan Zhen, Wang Wei
2020, 57(7):  1460-1471.  doi:10.7544/issn1000-1239.2020.20190643
Existing knowledge graphs (KGs) inevitably suffer from incompleteness. One feasible approach to this issue is to introduce knowledge from other KGs. During knowledge integration, entity alignment (EA), which aims to find equivalent entities in different KGs, is the most crucial step, as entities are the pivots that connect heterogeneous KGs. State-of-the-art EA solutions mainly rely on KG structure information to judge the equivalence of entities, whereas most entities in real-life KGs have low degrees and contain limited structural information. Additionally, the lack of supervision signals also constrains the effectiveness of EA models. To tackle these issues, we propose to combine entity name information, which is not affected by entity degree, with structural information, so as to convey more comprehensive signals for aligning entities. Upon this basic EA framework, we further devise a curriculum-learning-based iterative training strategy that increases the scale of labelled data with confident EA pairs selected from the results of each round. Moreover, we exploit the word mover's distance model to make better use of entity name information and to re-rank alignment results, which in turn boosts the accuracy of EA. We evaluate our proposal on both cross-lingual and mono-lingual EA tasks against strong existing methods, and the experimental results reveal that our solution outperforms the state of the art by a large margin.
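A sketch of one iterative-training round as described above, under assumptions about how the name and structural similarities are fused and thresholded: entity pairs that are mutual nearest neighbors under the fused similarity, with a clear margin over the runner-up, are added to the labelled seed set for the next round.

```python
# Illustrative confident-pair selection; alpha, the margin, and the fusion rule
# are assumptions, not the paper's exact settings.

import numpy as np

def confident_pairs(sim_struct, sim_name, alpha=0.5, margin=0.05):
    """sim_* : (n_src, n_tgt) similarity matrices; returns confident (i, j) pairs."""
    sim = alpha * sim_struct + (1 - alpha) * sim_name      # fuse the two signals
    best_tgt = sim.argmax(axis=1)                          # source -> target
    best_src = sim.argmax(axis=0)                          # target -> source
    pairs = []
    for i, j in enumerate(best_tgt):
        if best_src[j] != i:                               # keep mutual matches only
            continue
        runner_up = np.partition(sim[i], -2)[-2]           # second-best score in the row
        if sim[i, j] - runner_up >= margin:                # require a clear margin
            pairs.append((i, int(j)))
    return pairs

s_struct = np.array([[.9, .2, .1], [.3, .7, .2], [.1, .3, .8]])
s_name = np.array([[.8, .1, .0], [.2, .9, .1], [.0, .2, .6]])
print(confident_pairs(s_struct, s_name))    # [(0, 0), (1, 1), (2, 2)]
```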
Minimal Conflict Set Solving Method Combined with Fault Logic Relationship
Ouyang Dantong, Gao Han, Xu Yini, Zhang Liming
2020, 57(7):  1472-1480.  doi:10.7544/issn1000-1239.2020.20190338
Model-based diagnosis is an important research direction in the field of artificial intelligence, and computing minimal conflict sets (MCSs) is an important step in solving diagnosis problems. The MCS-SFFO (minimal conflict set-structural feature of fault output) method searches the set enumeration tree (SE-tree) in a reverse depth-first manner and prunes combinations of components unrelated to the fault output. Building on MCS-SFFO, this paper proposes MCS-FLR (minimal conflict set-fault logic relationship), a further pruning method based on the fault logic relationships of the circuit. First, a single-component non-conflict theorem is proposed, which prunes single components and thereby avoids part of the solution-free space. Second, a non-minimal conflict set theorem is proposed, stating that every superset of a fault-output-related conflict set is itself a conflict set, so non-minimal conflict sets can be further pruned from the solution space. Compared with MCS-SFFO, the MCS-FLR method prunes both the solution space and the solution-free space, which reduces the number of SAT solver calls and thus the solving time. Experimental results show that the MCS-FLR method is significantly more efficient than the MCS-SFFO method.
Extended S-LSTM Based Textual Entailment Recognition
Hu Chaowen, Wu Changxing, Yang Yalian
2020, 57(7):  1481-1489.  doi:10.7544/issn1000-1239.2020.20190522
Textual entailment recognition aims at automatically determining whether there is an entailment relationship between a given premise and hypothesis (usually two sentences). It is a basic and challenging task in natural language processing. Current dominant models, which are based on deep learning, usually encode the semantic representations of the two sentences separately instead of considering them as a whole. Besides, most of them do not leverage both sentence-level global information and n-gram-level local information when capturing the semantic relationship. The recently proposed S-LSTM can learn semantic representations of a sentence and its n-grams simultaneously, achieving promising performance on tasks such as text classification. Considering the above, a model based on an extended S-LSTM is proposed for textual entailment recognition. On the one hand, S-LSTM is extended to learn semantic representations of the premise and hypothesis simultaneously, regarding them as a whole. On the other hand, to obtain better semantic representations, both sentence-level and n-gram-level information are used to capture the semantic relationship. Experimental results on the English SNLI dataset and the Chinese CNLI dataset show that the proposed model outperforms the baselines.
A Method of Map Outlines Generation Based on Smartphone Sensor Data
Tao Tao, Sun Yu’e, Chen Dongmei, Yang Wenjian, Huang He, Luo Yonglong
2020, 57(7):  1490-1507.  doi:10.7544/issn1000-1239.2020.20190605
With the development of the economy, environmental maps are becoming more and more important to our daily lives. Existing map generation mechanisms are mainly based on vehicle-mounted GPS equipment for data acquisition and road network construction. However, these methods suffer from low precision and poor efficiency, and they cannot construct maps for areas where the acquisition equipment is difficult to deploy or the GPS signal is weak. To solve these problems, this paper proposes constructing a map by mining the sensor data generated by widely used smartphones, and designs a data fusion algorithm based on this idea. Firstly, machine learning classification algorithms and signal processing techniques are used to identify the traveling state. Then, a segmentation mechanism is combined with the dynamic time warping algorithm to process steering segments. Finally, the local map outline is generated by fusing the distance and direction data of the effective segments. Experimental results based on data collected from a real road network prove the effectiveness of the proposed method in constructing local map outlines and the feasibility of deep mining of sensor data.
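For reference, a minimal dynamic time warping (DTW) sketch, the alignment primitive mentioned above for matching steering segments; the feature being compared (a heading-angle sequence, e.g. from the gyroscope) is an assumption made for illustration.

```python
# Classical DTW between two 1D sequences; the heading-angle inputs are toy data.

def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# two turning (heading) sequences sampled at different walking speeds
turn_a = [0, 5, 20, 60, 90, 90]
turn_b = [0, 10, 45, 85, 90]
print(dtw_distance(turn_a, turn_b))
```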
Research on Face Anti-Spoofing Algorithm Based on DQ_LBP
Shu Xin, Tang Hui, Yang Xibei, Song Xiaoning, Wu Xiaojun
2020, 57(7):  1508-1521.  doi:10.7544/issn1000-1239.2020.20190319
As face recognition technology becomes integrated into daily life, face spoofing detection, a key step before face recognition, has attracted more and more attention. For print attacks and video attacks, we propose a difference quantization local binary pattern (DQ_LBP) that refines the traditional local binary pattern (LBP) by quantifying the difference between the value of the central pixel and its neighborhood pixels. DQ_LBP extracts the difference information between local pixels without increasing the original dimension of LBP, and thus describes the local texture features of images more accurately. In addition, we use the spatial pyramid (SP) algorithm to compute histograms of DQ_LBP features in different color spaces and cascade them into a unified feature vector, so as to obtain more elaborate local color texture information and spatial structure information from the face sample, which further improves the spoofing detection performance of our algorithm. Extensive experiments conducted on three challenging face anti-spoofing databases (CASIA FASD, Replay-Attack, and Replay-Mobile) show that our algorithm performs better than the state of the art. Moreover, it has great potential for application in real-time devices.
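As a baseline for the descriptor being refined, a sketch of the classical 8-neighbor LBP and its 256-bin histogram; the difference-quantization step introduced by DQ_LBP is specific to the paper and is not reproduced here.

```python
# Classical LBP sketch (the descriptor DQ_LBP builds on); per-channel histograms
# over several color spaces can then be cascaded as the abstract describes.

import numpy as np

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(img, r, c):
    """Classical 8-bit LBP code at pixel (r, c)."""
    center = img[r, c]
    code = 0
    for k, (dr, dc) in enumerate(OFFSETS):
        code |= (1 if img[r + dr, c + dc] >= center else 0) << k
    return code

def lbp_histogram(channel):
    """256-bin LBP histogram of one color channel (e.g. Y, Cb or Cr)."""
    h = np.zeros(256, dtype=int)
    for r in range(1, channel.shape[0] - 1):
        for c in range(1, channel.shape[1] - 1):
            h[lbp_code(channel, r, c)] += 1
    return h

img = np.random.default_rng(0).integers(0, 256, size=(32, 32))
print(lbp_histogram(img).sum())   # (32-2) * (32-2) codes
```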
Antagonistic Video Generation Method Based on Multimodal Input
Yu Haitao, Yang Xiaoshan, Xu Changsheng
2020, 57(7):  1522-1530.  doi:10.7544/issn1000-1239.2020.20190479
Video generation is an important and challenging task in the fields of computer vision and multimedia. Existing video generation methods based on generative adversarial networks (GANs) usually lack an effective scheme to control the coherence of the video. Realizing artificial intelligence algorithms that can automatically generate realistic video is an important indicator of a more complete understanding of visual appearance and motion. A new multi-modal conditional video generation model is proposed in this paper. The model takes pictures and text as input, obtains the motion information of the video through a text feature encoding network and a motion feature decoding network, and generates video with coherent motion by combining the input images. In addition, the method predicts video frames by applying affine transformations to the input images, which makes the generation model more controllable and the generated results more robust. Experimental results on the SBMG (single-digit bouncing MNIST gifs), TBMG (two-digit bouncing MNIST gifs) and KTH (Kungliga Tekniska Högskolan human actions) datasets show that the proposed method performs better than existing methods in both target clarity and video coherence. In addition, qualitative evaluation and quantitative evaluation with SSIM (structural similarity index) and PSNR (peak signal-to-noise ratio) metrics demonstrate that the proposed multi-modal video frame generation network plays a key role in the generation process.
Task-Adaptive End-to-End Networks for Stereo Matching
Li Tong, Ma Wei, Xu Shibiao, Zhang Xiaopeng
2020, 57(7):  1531-1538.  doi:10.7544/issn1000-1239.2020.20190478
Estimating depth/disparity information from stereo pairs via stereo matching is a classical research topic in computer vision. Recently, along with the development of deep learning technologies, many end-to-end deep networks have been proposed for stereo matching. These networks generally borrow convolutional neural network (CNN) structures originally designed for other tasks to extract features, and such structures are generally redundant for stereo matching. Besides, the 3D convolutions in these networks are too costly to be extended to the large receptive fields that are helpful for disparity estimation. To overcome these problems, we propose a deep network structure based on the properties of stereo matching. In the proposed network, a concise and effective feature extraction module is presented. Moreover, a separated 3D convolution is introduced to avoid the parameter explosion caused by increasing the size of convolution kernels. We validate our network on the SceneFlow dataset in terms of both accuracy and computational cost. Results show that the proposed network achieves state-of-the-art performance. Compared with other structures, our feature extraction module reduces parameters by 90% and time cost by 25% while achieving comparable accuracy. At the same time, our separated 3D convolution, accompanied by group normalization (GN), achieves lower end-point error (EPE) than baseline methods.
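A sketch of the general "separated 3D convolution" idea: a k×k×k kernel factored into a spatial 1×k×k convolution followed by a disparity-wise k×1×1 convolution over the cost volume. Channel sizes are arbitrary here and the exact factorization used in the paper may differ.

```python
# Illustrative factored 3D convolution with GN, under assumed channel sizes.

import torch
import torch.nn as nn

class Separated3dConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        p = k // 2
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, k, k),
                                 padding=(0, p, p), bias=False)
        self.disparity = nn.Conv3d(out_ch, out_ch, kernel_size=(k, 1, 1),
                                   padding=(p, 0, 0), bias=False)
        self.norm = nn.GroupNorm(num_groups=8, num_channels=out_ch)   # GN, as in the abstract
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):            # x: (batch, channels, disparity, height, width)
        return self.relu(self.norm(self.disparity(self.spatial(x))))

cost_volume = torch.randn(1, 32, 24, 64, 128)        # toy cost volume
print(Separated3dConv(32, 32)(cost_volume).shape)    # same disparity/spatial size

# Parameter count: k^3 * C_in * C_out for a full 3D kernel versus
# k^2 * C_in * C_out + k * C_out^2 for the separated version.
```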
Research on Task Offloading Based on Deep Reinforcement Learning in Mobile Edge Computing
Lu Haifeng, Gu Chunhua, Luo Fei, Ding Weichao, Yang Ting, Zheng Shuai
2020, 57(7):  1539-1554.  doi:10.7544/issn1000-1239.2020.20190291
In mobile edge computing, local devices can offload tasks to servers near the edge of the network for data storage and computation, thereby reducing service delay and power consumption, so the task offloading decision has great research value. This paper first constructs an offloading model with multiple service nodes and multiple dependencies within mobile tasks for large-scale heterogeneous mobile edge computing. Then, an improved deep reinforcement learning algorithm is proposed to optimize the task offloading strategy in combination with the practical application scenarios of mobile edge computing. Finally, the advantages and disadvantages of each offloading strategy are analyzed by comprehensively comparing energy consumption, cost, load balancing, delay, network usage and average execution time. Simulation results show that the improved HERDRQN algorithm, which is based on a long short-term memory (LSTM) network and hindsight experience replay (HER), performs well on energy consumption, cost, load balancing and delay. In addition, this paper uses the various algorithms to offload a certain number of applications and compares the distribution of heterogeneous devices under different CPU utilizations to verify the relationship between the offloading strategy and each evaluation index, demonstrating that the strategy generated by the HERDRQN algorithm is scientific and effective for the task offloading problem.
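A generic sketch of the HER idea referenced above: episodes are stored a second time with the goal replaced by a state that was actually reached, and the reward recomputed. How a "goal" is defined for the offloading problem is specific to the paper and is not reproduced here.

```python
# Generic hindsight experience replay buffer; reward_fn and the goal encoding are
# assumptions for illustration, not the paper's HERDRQN design.

import random
from collections import deque

class HindsightReplayBuffer:
    def __init__(self, reward_fn, capacity=10000):
        self.buffer = deque(maxlen=capacity)
        self.reward_fn = reward_fn            # reward_fn(next_state, action, goal) -> float

    def store_episode(self, episode, goal):
        """episode: list of (state, action, next_state) tuples."""
        achieved = episode[-1][2]             # final achieved state as the hindsight goal
        for state, action, next_state in episode:
            self.buffer.append((state, action, self.reward_fn(next_state, action, goal),
                                next_state, goal))
            self.buffer.append((state, action, self.reward_fn(next_state, action, achieved),
                                next_state, achieved))

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

# toy usage: reward 1 when the reached state matches the goal
buf = HindsightReplayBuffer(lambda s, a, g: 1.0 if s == g else 0.0)
buf.store_episode([((0,), 1, (1,)), ((1,), 0, (2,))], goal=(5,))
print(len(buf.buffer), buf.sample(2))
```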
A Lightweight Scalable Protocol for Public Blockchain
Chen Huan, Wang Yijie
2020, 57(7):  1555-1567.  doi:10.7544/issn1000-1239.2020.20190552
Blockchain technology solves the fundamental problem of building trust in an untrusted environment and is regarded as a new disruptive technology after cloud computing, the IoT, and artificial intelligence. However, current public blockchains face two serious problems: on the one hand, low system throughput cannot meet the needs of large-scale applications; on the other hand, the ever-growing blockchain is quite cumbersome for validators, since it consumes substantial disk and RAM resources. Existing works mostly focus on improving system throughput, ignoring the increasingly serious problem of long-term blockchain data growth. In this work, we propose PocketChain, a scalable and storage-friendly lightweight protocol that achieves high throughput and low storage without sacrificing decentralization and security. Firstly, PocketChain uses a stateless client design, employing an RSA accumulator to compress the large state into one short commitment, so that validators only need to store block headers, greatly reducing disk and RAM requirements. Secondly, PocketChain applies the stateless client to sharding, which not only improves system throughput but also overcomes the state migration problem caused by the periodic reshuffling in sharding; this allows the reshuffling frequency to be increased, further improving the security of sharding. Experimental results show that PocketChain reduces the storage overhead of validators and linearly improves system throughput.
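A toy sketch of the RSA-accumulator mechanism behind the stateless client design described above, with deliberately tiny and insecure parameters: each state element maps to a prime, the accumulator is a single group element, and a membership witness lets a validator verify an element without storing the full state. PocketChain's actual construction and parameters are in the paper.

```python
# Illustration only: toy RSA accumulator. A real deployment uses a ~2048-bit modulus
# whose factorization is unknown and a collision-resistant hash-to-prime mapping.

from sympy import nextprime   # hash-to-prime is simplified to nextprime here

N = 1009 * 2003               # toy modulus (product of two primes), insecure
G = 3                         # accumulator base

def to_prime(element: int) -> int:
    return nextprime(element)

def accumulate(elements):
    acc = G
    for e in elements:
        acc = pow(acc, to_prime(e), N)
    return acc

def witness(elements, target):
    """Accumulator computed over all elements except the target."""
    return accumulate([e for e in elements if e != target])

def verify(acc, target, wit):
    return pow(wit, to_prime(target), N) == acc

state = [12, 77, 301, 9000]                  # toy state elements
acc = accumulate(state)                      # short commitment a validator keeps
w = witness(state, 301)                      # membership proof supplied with a transaction
print(verify(acc, 301, w))                   # True: 301 is part of the accumulated state
print(verify(acc, 500, w))                   # False for a mismatched element/witness pair
                                             # (except with negligible probability)
```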