ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 December 2019, Volume 56 Issue 12
Cross-Domain Adversarial Learning for Zero-Shot Classification
Liu Huan, Zheng Qinghua, Luo Minnan, Zhao Hongke, Xiao Yang, Lü Yanzhang
2019, 56(12):  2521-2535.  doi:10.7544/issn1000-1239.2019.20190614
Zero-shot learning (ZSL) aims to recognize novel categories that have few or even no training samples and follow a different distribution from seen classes. With recent advances of deep neural networks in cross-modal generation, encouraging breakthroughs have been achieved in classifying unseen categories from their synthetic samples. Extant methods synthesize unseen samples by combining generative adversarial nets (GANs) and variational auto-encoders (VAEs) with a shared generator and decoder. However, because these two kinds of generative models produce different data distributions, the fake samples synthesized by the joint model follow a complex multi-domain distribution rather than a single model distribution. To address this issue, we propose a cross-domain adversarial generative network (CrossD-AGN) that integrates traditional GANs and VAEs into a unified framework able to generate unseen samples from class-level semantics for zero-shot classification. We propose two symmetric cross-domain discriminators, together with a cross-domain adversarial learning mechanism, which learn to determine whether a synthetic sample comes from the generator-domain or the decoder-domain distribution, thereby driving the generator/decoder of the joint model to improve its capacity for synthesizing samples. Extensive experiments on several real-world datasets demonstrate the effectiveness and superiority of the proposed model for zero-shot visual classification.
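To make the cross-domain adversarial mechanism concrete, here is a minimal PyTorch-style sketch (not the authors' code; all layer sizes and names are illustrative, and a single cross-domain discriminator stands in for the paper's two symmetric ones): the generator and the decoder both synthesize features from class semantics, the discriminator learns to tell which domain produced a sample, and the generator/decoder are trained to fool it so the two distributions are pulled together.

```python
import torch
import torch.nn as nn

SEM, NOISE, FEAT = 85, 32, 2048   # assumed sizes: semantics, noise, visual feature

G   = nn.Sequential(nn.Linear(SEM + NOISE, 512), nn.ReLU(), nn.Linear(512, FEAT))  # GAN generator
Dec = nn.Sequential(nn.Linear(SEM + NOISE, 512), nn.ReLU(), nn.Linear(512, FEAT))  # VAE decoder
D   = nn.Sequential(nn.Linear(FEAT, 256), nn.ReLU(), nn.Linear(256, 1))            # cross-domain discriminator

bce   = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
opt_g = torch.optim.Adam(list(G.parameters()) + list(Dec.parameters()), lr=1e-4)

def cross_domain_step(sem):
    """sem: (batch, SEM) class-semantic embeddings."""
    z = torch.randn(sem.size(0), NOISE)
    x_g = G(torch.cat([sem, z], dim=1))     # generator-domain sample
    x_d = Dec(torch.cat([sem, z], dim=1))   # decoder-domain sample
    ones, zeros = torch.ones(sem.size(0), 1), torch.zeros(sem.size(0), 1)

    # 1) The discriminator learns which domain a synthetic sample came from.
    d_loss = bce(D(x_g.detach()), ones) + bce(D(x_d.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator and decoder are updated to make their outputs indistinguishable.
    g_loss = bce(D(x_g), zeros) + bce(D(x_d), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

cross_domain_step(torch.randn(16, SEM))
```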
End-to-end Knowledge Triplet Extraction Combined with Adversarial Training
Huang Peixin, Zhao Xiang, Fang Yang, Zhu Huiming, Xiao Weidong
2019, 56(12):  2536-2548.  doi:10.7544/issn1000-1239.2019.20190640
As a system that effectively represents the real world, the knowledge graph has attracted wide attention from academia and industry, and its ability to represent knowledge accurately is widely used in upper-level applications such as information services, intelligent search, and automatic question answering. A fact (piece of knowledge) in the form of a triplet (head_entity, relation, tail_entity) is the basic unit of a knowledge graph. Since the facts in existing knowledge graphs are far from enough to describe the real world, acquiring more knowledge for knowledge graph completion and construction is crucial. This paper investigates the problem of knowledge triplet extraction in the task of knowledge acquisition and proposes an end-to-end extraction method combined with adversarial training. Traditional techniques, whether pipeline or joint extraction, fail to capture the link between the two subtasks of named entity recognition and relation extraction, which leads to error propagation and degraded extraction performance. To overcome these flaws, we adopt a joint entity and relation tagging strategy and leverage an end-to-end framework to automatically tag the text and classify the tagging results. In addition, a self-attention mechanism is added to assist text encoding, an objective function with a bias term is introduced to increase the attention paid to relevant entities, and adversarial training is utilized to improve the robustness of the model. In the experiments, we evaluate the proposed model via three evaluation metrics and analyze the results from four aspects. The experimental results verify that our model outperforms other state-of-the-art alternatives on knowledge triplet extraction.
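As an illustration of joint entity-relation tagging (one common scheme from the literature, not necessarily the exact one used in the paper), the sketch below decodes a tag sequence, where each tag encodes an entity boundary, a relation type, and a role (1 = head, 2 = tail), into (head, relation, tail) triplets:

```python
def decode_triplets(tokens, tags):
    """Decode joint tags like 'B-Founder-1' (B/I boundary, relation, 1=head 2=tail)
    into (head_entity, relation, tail_entity) triplets. Illustrative scheme only,
    handling one triplet per relation type."""
    spans = {}  # relation -> {role: entity string}
    i = 0
    while i < len(tags):
        if tags[i].startswith("B-"):
            _, rel, role = tags[i].split("-")
            j = i + 1
            while j < len(tags) and tags[j] == f"I-{rel}-{role}":
                j += 1
            spans.setdefault(rel, {})[role] = " ".join(tokens[i:j])
            i = j
        else:
            i += 1
    return [(e["1"], rel, e["2"]) for rel, e in spans.items() if "1" in e and "2" in e]

tokens = ["Bill", "Gates", "founded", "Microsoft"]
tags   = ["B-Founder-1", "I-Founder-1", "O", "B-Founder-2"]
print(decode_triplets(tokens, tags))  # [('Bill Gates', 'Founder', 'Microsoft')]
```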
Open Knowledge Graph Representation Learning Based on Neighbors and Semantic Affinity
Du Zhijuan, Du Zhirong, Wang Lu
2019, 56(12):  2549-2561.  doi:10.7544/issn1000-1239.2019.20190648
Knowledge graphs (KGs) break data isolation across different scenarios and provide basic support for practical applications. Representation learning embeds a KG into a low-dimensional vector space to facilitate its application. However, existing KG representation learning has two problems: 1) it assumes that the KG satisfies the closed-world assumption, requiring all entities to be visible during training, whereas in reality most KGs grow rapidly, e.g., DBpedia gains roughly 200 new entities per day; 2) complex semantic interactions, such as matrix projection and convolution, are used to improve model accuracy, which limits model scalability. To this end, we propose TransNS, a representation learning method for open KGs that allows new entities. It selects related neighbors as the attributes of an entity to infer new entities, and uses the semantic affinity between entities to select negative triples in the learning phase, enhancing the semantic interaction capability. We compare TransNS with the state-of-the-art baselines on 5 traditional and 8 new datasets. The results show that TransNS performs well on open KGs and even outperforms existing models on the closed benchmark KGs.
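The neighbor-based inference of unseen entities can be sketched in numpy (an illustrative aggregation and negative-sampling rule under TransE-style embeddings, not the authors' exact model): a new entity's embedding is composed from its neighbors' embeddings translated by the connecting relations, and hard negatives are picked among semantically close entities.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_ent, n_rel = 50, 1000, 20
E = rng.normal(size=(n_ent, dim))    # trained entity embeddings
R = rng.normal(size=(n_rel, dim))    # trained relation embeddings (TransE-style: h + r ≈ t)

def embed_new_entity(neighbors):
    """neighbors: list of (relation_id, entity_id, d) where d = +1 if the new
    entity is the head of the triple and -1 if it is the tail. Each neighbor
    'votes' for an embedding via the translation h + r ≈ t; votes are averaged."""
    votes = [E[e] - d * R[r] for r, e, d in neighbors]
    return np.mean(votes, axis=0)

def semantic_negatives(t, k=5):
    """Pick the k entities most similar to entity t as hard negatives
    (cosine similarity standing in for the paper's semantic affinity)."""
    sims = E @ E[t] / (np.linalg.norm(E, axis=1) * np.linalg.norm(E[t]) + 1e-9)
    return [e for e in np.argsort(-sims) if e != t][:k]

new_e = embed_new_entity([(3, 17, +1), (5, 42, -1)])
print(new_e.shape, semantic_negatives(17))
```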
Network Representation Learning Using the Optimizations of Neighboring Vertices and Relation Model
Ye Zhonglin, Zhao Haixing, Zhang Ke, Zhu Yu, Xiao Yuzhi
2019, 56(12):  2562-2577.  doi:10.7544/issn1000-1239.2019.20180566
Network representation learning aims at embedding the network topology, vertex contents and other network information into a low-dimensional vector space, thus providing an effective tool for network data mining, link prediction, recommendation systems, etc. However, existing neural-network-based learning algorithms neglect the position information of context vertices and ignore the semantic associations between vertices and texts. Therefore, this paper proposes a novel network representation learning algorithm using the optimizations of neighboring vertices and a relation model (NRNR). NRNR uses neighboring vertices to optimize the learning procedure, so that the position information of vertices in the context window is embedded into the network representations. In addition, NRNR is the first to introduce relational modeling from knowledge representation learning to learn the structural features of networks; the text contents between vertices are thus embedded into the network representations in the form of relational constraints. Moreover, NRNR proposes a feasible and effective joint learning framework that integrates the above two goals into a unified optimization objective. Experimental results show that NRNR outperforms all baseline algorithms on the node classification tasks considered in this paper; in network visualization tasks, the representations obtained by NRNR exhibit distinct clustering boundaries.
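A minimal numpy sketch of such a joint objective (all notation here is an illustrative assumption, not NRNR's published formulation): a vertex u predicts a context vertex v through a position-specific context matrix, while the text between u and v, embedded as a relation vector, constrains u + r ≈ v, TransE-style.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_v, win = 64, 500, 5
V = rng.normal(scale=0.1, size=(n_v, dim))        # vertex embeddings
C = rng.normal(scale=0.1, size=(win, n_v, dim))   # one context matrix per window position

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def joint_loss(u, v, pos, r_text, alpha=0.5):
    """u predicts context vertex v at window position pos (position-aware
    skip-gram term), while the text between u and v, embedded as r_text,
    constrains u + r ≈ v (relational term)."""
    sg  = -np.log(sigmoid(V[u] @ C[pos, v]))
    rel = np.sum((V[u] + r_text - V[v]) ** 2)
    return sg + alpha * rel

print(joint_loss(0, 3, pos=2, r_text=rng.normal(scale=0.1, size=dim)))
```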
EasiFFRA: A Fast Feature Reduction Algorithm Based on Neighborhood Rough Set
Wang Nian, Peng Zhenghong, Cui Li
2019, 56(12):  2578-2588.  doi:10.7544/issn1000-1239.2019.20180541
Extracting effective features from high-dimensional, heterogeneous feature sets is significant, as it is the basis for prediction and classification in Internet of things (IoT) applications. Such systems usually deploy multiple sensors, and quite a few features are extracted to make full use of the environmental information. High-dimensional features often contain redundant and irrelevant features, which reduce not only the speed of the system but also the classification performance, so it is necessary to recognize and delete them. The neighborhood rough set (NRS) is a popular method for dimensionality reduction that deletes irrelevant and redundant features while keeping the separability of the dataset; however, it has not been widely applied because of its huge computational cost. In this paper, a fast feature reduction algorithm, EasiFFRA, is proposed based on the symmetry of neighborhood relations and a decision-attribute filtering mechanism. It reduces redundant computation by preferentially traversing buckets in which neighboring samples are densely distributed, and by recording in a Hash table the samples that cannot belong to the positive region under the current feature subset. Furthermore, it significantly reduces the number of distance calculations by filtering out samples that have the same label as the current sample. The validity of the algorithm is verified on a real-world dataset as well as 12 public datasets. The results show that, compared with FHARA, EasiFFRA reduces the computation time by 75.45%. EasiFFRA reduces the effect of irrelevant and redundant features on classification and prediction results and enhances the real-time performance of neighborhood-rough-set-based feature reduction, which has important application value.
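Two of the pruning ideas can be sketched as follows (a simplified, single-feature-subset illustration, not the full EasiFFRA algorithm): samples are hashed into grid buckets so that only adjacent buckets need to be scanned, and distance computations are skipped for same-label samples when testing whether a sample belongs to the positive region.

```python
import numpy as np
from collections import defaultdict
from itertools import product

def positive_region_fraction(X, y, delta):
    """Fraction of samples whose delta-neighborhood is label-pure (the positive
    region in neighborhood rough sets). Samples are hashed into grid buckets so
    only the 3^d adjacent cells are scanned, and same-label samples are skipped
    without computing any distance."""
    keys = np.floor(X / delta).astype(int)
    buckets = defaultdict(list)
    for i, k in enumerate(map(tuple, keys)):
        buckets[k].append(i)

    def pure(i):
        for off in product((-1, 0, 1), repeat=X.shape[1]):
            for j in buckets.get(tuple(keys[i] + np.array(off)), ()):
                if y[j] == y[i]:
                    continue                       # same label: skip the distance test
                if np.linalg.norm(X[i] - X[j]) <= delta:
                    return False                   # a different-label neighbor found
        return True

    return sum(pure(i) for i in range(len(X))) / len(X)

X = np.random.rand(200, 2); y = (X[:, 0] > 0.5).astype(int)
print(positive_region_fraction(X, y, delta=0.05))
```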
Deep Stack Least Square Classifier with Inter-Layer Model Knowledge Transfer
Feng Wei, Hang Wenlong, Liang Shuang, Liu Xuejun, Wang Hui
2019, 56(12):  2589-2599.  doi:10.7544/issn1000-1239.2019.20180741
The traditional least squares classifier (LSC) has been widely used in image recognition, speech recognition and other fields due to its simplicity and effectiveness. However, the traditional LSC may suffer from weak generalization capability when taking natural data in its raw form as input. To overcome this problem, a deep transfer least squares classifier (DTLSC) is proposed on the basis of the stacked generalization philosophy and the transfer learning mechanism. Firstly, following stacked generalization, DTLSC adopts the LSC as the basic stacking unit to construct a deep stacking network, which avoids the non-convex optimization problems of traditional deep networks and thus improves classification performance and computational efficiency. Secondly, transfer learning is used to leverage the model knowledge of previous layers to help construct the model of the current layer, guaranteeing inter-layer model consistency and further improving the generalization performance of DTLSC. In addition, an adaptive transfer learning strategy is introduced to use the model knowledge of previous layers selectively, alleviating negative transfer by rejecting uncorrelated model knowledge from previous layers. Experimental results on synthetic and real-world datasets show the effectiveness of the proposed DTLSC.
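A minimal numpy sketch of the stacking-plus-transfer idea (illustrative; the transfer regularizer below is an assumed form, not DTLSC's exact objective): each layer is a ridge least squares classifier whose input is the raw features concatenated with the previous layer's predictions, and whose weights are pulled toward the previous layer's weights.

```python
import numpy as np

def fit_layer(F, Y, W_prev=None, lam=1.0, mu=1.0):
    """Ridge least squares with an (assumed) inter-layer transfer term:
    min ||FW - Y||^2 + lam*||W||^2 + mu*||W - W_prev||^2."""
    d = F.shape[1]
    m = mu if W_prev is not None else 0.0
    A = F.T @ F + (lam + m) * np.eye(d)
    B = F.T @ Y + (m * W_prev if W_prev is not None else 0.0)
    return np.linalg.solve(A, B)

def fit_stack(X, Y, layers=3):
    Ws, pred, W_prev = [], np.zeros_like(Y), None
    for _ in range(layers):
        F = np.hstack([X, pred])      # stacking: raw input + previous layer's output
        W = fit_layer(F, Y, W_prev)   # transfer: pull W toward the previous layer's W
        Ws.append(W); pred = F @ W; W_prev = W
    return Ws

def predict(Ws, X):
    pred = np.zeros((X.shape[0], Ws[0].shape[1]))
    for W in Ws:
        pred = np.hstack([X, pred]) @ W
    return pred.argmax(axis=1)

X = np.random.randn(100, 10)
Y = np.eye(3)[np.random.randint(3, size=100)]   # one-hot targets
print(predict(fit_stack(X, Y), X)[:10])
```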
Granular Vectors and K Nearest Neighbor Granular Classifiers
Chen Yuming, Li Wei
2019, 56(12):  2600-2611.  doi:10.7544/issn1000-1239.2019.20180572
The K nearest neighbor (KNN) classifier is a classical, simple and effective classifier that has been widely employed in artificial intelligence and machine learning. Aiming at the difficulty traditional classifiers have in dealing with uncertain data, we study a technique that granulates samples into neighborhood granules on each atomic feature, construct granular vectors, and propose a K nearest neighbor classification method based on these granular vectors. The method introduces a neighborhood rough set model to granulate the samples of a classification system, converting raw data into feature neighborhood granules. A granular vector is then induced from a set of neighborhood granules, and several operators on granular vectors are defined. We present two metrics on granular vectors, the relative granular distance and the absolute granular distance, and prove the monotonicity of these distances. Furthermore, the concept of the K nearest neighbor granular vector is defined based on granular distance, and a K nearest neighbor granular classifier is designed. Finally, the classifier is compared with the classical KNN classifier on several UCI datasets. Theoretical analysis and experimental results show that the K nearest neighbor granular classifier achieves better classification performance under suitable granulation parameters and values of k.
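To illustrate the pipeline (with assumed definitions of the granule and the distance, since the paper's exact operators are not reproduced here): each sample is granulated per feature into the set of samples within a delta-neighborhood, and the distance between two granular vectors sums a set-difference measure over features.

```python
import numpy as np
from collections import Counter

def granulate(X, delta):
    """Granular vector of each sample: per feature, the set of sample indices
    whose value lies within delta of it (a neighborhood granule)."""
    n, d = X.shape
    return [[frozenset(np.where(np.abs(X[:, f] - X[i, f]) <= delta)[0])
             for f in range(d)] for i in range(n)]

def granular_distance(gu, gv):
    # Assumed metric: normalized symmetric difference, summed over features.
    return sum(len(a ^ b) / len(a | b) for a, b in zip(gu, gv))

def knn_granular(G, y, i, k=5):
    dists = [(granular_distance(G[i], G[j]), y[j]) for j in range(len(G)) if j != i]
    top = sorted(dists)[:k]
    return Counter(label for _, label in top).most_common(1)[0][0]

X = np.random.rand(100, 4); y = (X[:, 0] + X[:, 1] > 1).astype(int)
G = granulate(X, delta=0.15)
print(knn_granular(G, y, i=0))
```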
Agent Negotiation Model Based on Round Limit Change of Non-Sparse Trust Networks
Wang Jindi, Tong Xiangrong
2019, 56(12):  2612-2622.  doi:10.7544/issn1000-1239.2019.20190163
In the process of multi-Agent negotiation, trust has received little attention in related work, yet the trust value directly affects the utility and strategy of negotiation. Most previous work studies direct trust; however, when direct trust relationships are missing, the trust relationship matrix becomes sparse, which reduces the efficiency with which trust relationships can be used. In addition, most related work ignores the influence of trust on negotiation strategy and negotiation rounds. To address these problems, this paper combines direct trust with indirect trust obtained through trust transfer to form a non-sparse trust network, so that an Agent can select reliable negotiation rivals based on the trust values of bidders. Facing rivals with different trust values, the Agent adopts different strategies. The paper improves the round-limit function and the bidding function of the negotiation model: the Agent shows more patience toward rivals with high trust values, extending the round limit and the bidding space and raising bids appropriately, while applying the reverse operations to rivals with low trust values. Comparison experiments show that the new model is more reliable than models that do not consider trust, and achieves a better negotiation success rate and utility.
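A small numpy sketch of the two ingredients (both formulas are illustrative assumptions, not the paper's exact functions): indirect trust is propagated along two-hop paths to densify a sparse trust matrix, and the negotiation round limit grows with the trust placed in the rival.

```python
import numpy as np

def densify_trust(T):
    """T[i, j] in [0, 1] is direct trust (0 = unknown). Fill missing entries with
    the best two-hop transferred trust min(T[i,k], T[k,j]) over intermediaries k."""
    n = T.shape[0]
    D = T.copy()
    for i in range(n):
        for j in range(n):
            if i != j and D[i, j] == 0:
                via = np.minimum(T[i, :], T[:, j])   # trust through each intermediary
                D[i, j] = via.max()
    return D

def round_limit(base_rounds, trust):
    # Assumed form: more patience (a longer round limit) for higher-trust rivals.
    return int(base_rounds * (1 + trust))

T = np.array([[0, .8, 0], [.8, 0, .6], [0, .6, 0]], float)
D = densify_trust(T)
print(D[0, 2], round_limit(10, D[0, 2]))  # 0.6 and 16
```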
MUS Enumeration Based on Double-Model
Ouyang Dantong, Gao Han, Tian Naiyu, Liu Meng, Zhang Liming
2019, 56(12):  2623-2631.  doi:10.7544/issn1000-1239.2019.20180852
Enumerating MUSes (minimal unsatisfiable subsets) of unsatisfiable problems is an important research direction in artificial intelligence. MARCO-M is currently the most efficient MUS enumeration method; it enumerates MUSes in a single way, from maximal models, without further effective pruning. To overcome this shortcoming, the MARCO-MAM method is proposed, which enumerates MUSes using both maximal and middle models. It exploits the fact that solving a satisfiable problem is cheaper than solving an unsatisfiable one, i.e., extracting an MSS (maximal satisfiable subset) is easier than extracting an MUS. Using the middle model improves MUS enumeration efficiency in two ways: if an MSS is found, the unexplored MUS space can be pruned by blocking down the MSS; otherwise an MUS is found, and the number of unsatisfiable iterations is reduced. The double model selects seeds from the top and the middle of the Hasse diagram, respectively, rather than only top-down; the single maximal model does not effectively exploit other pruning techniques to reduce the solution space when enumerating MUSes. The experimental results show that MARCO-MAM is more efficient than MARCO-M, especially on large-scale problems or large search spaces.
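A toy sketch of the double-model seeding idea on a small CNF (brute-force satisfiability, purely illustrative, not the MARCO-MAM implementation): seeds are drawn from the top (the maximal subset) and the middle of the power-set Hasse diagram; a satisfiable seed is grown to an MSS and blocked downward, while an unsatisfiable seed is shrunk directly to an MUS.

```python
from itertools import combinations, product

CLAUSES = [(1,), (-1,), (2,), (-2, 3), (-3,)]   # toy unsatisfiable CNF over x1..x3

def sat(subset):
    cls = [CLAUSES[i] for i in subset]
    return any(all(any((l > 0) == a[abs(l) - 1] for l in c) for c in cls)
               for a in product([False, True], repeat=3))

def shrink(seed):                        # UNSAT seed -> an MUS
    s = set(seed)
    for c in list(s):
        if not sat(s - {c}):
            s.remove(c)                  # still UNSAT without c: c is not needed
    return s

def grow(seed):                          # SAT seed -> an MSS
    s = set(seed)
    for c in range(len(CLAUSES)):
        if c not in s and sat(s | {c}):
            s.add(c)
    return s

muses, msses = [], []
def unexplored(s):                       # prune seeds blocked by known MUSes/MSSes
    return all(not m <= s for m in muses) and all(not s <= m for m in msses)

full = set(range(len(CLAUSES)))
# Double-model seeding: the maximal seed (top of the Hasse diagram) plus
# middle seeds (half-sized subsets), instead of top-down only.
seeds = [full] + [set(c) for c in combinations(full, len(full) // 2)]
for seed in seeds:
    if not unexplored(seed):
        continue
    if sat(seed):
        msses.append(grow(seed))         # block down: subsets of an MSS are SAT
    else:
        muses.append(shrink(seed))       # an MUS found with fewer UNSAT iterations

print("MUSes:", muses)                   # e.g. [{2, 3, 4}, {0, 1}]
```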
Dialect Language Recognition Based on Multi-Task Learning
Qin Chenguang, Wang Hai, Ren Jie, Zheng Jie, Yuan Lu, Zhao Zixin
2019, 56(12):  2632-2640.  doi:10.7544/issn1000-1239.2019.20190101
The development of deep learning and neural networks in recent years has led to new solutions to the complicated pattern recognition problems of speech recognition. To reinforce the protection of Chinese dialects and to improve both the accuracy of dialect language recognition and the diversity of speech signal pre-processing modules for language recognition, this paper proposes SLNet, a single-task dialect language recognition model based on the LSTM, currently the most widely used model in speech recognition. Considering the diversity and complexity of Chinese dialects, we use a neural network model built on a multi-task learning parameter sharing mechanism to discover the implicit correlation characteristics of different dialects, and propose MTLNet, a multi-task dialect recognition model. Further considering the regional characteristics of Chinese dialects, we adopt multi-task learning with hard parameter sharing to construct ATLNet, a multi-task neural network model based on auxiliary tasks. We design several sets of experiments to compare the single-task model with the proposed MTLNet and ATLNet. The results show that the multi-task methods improve language recognition accuracy to 80.2% on average and remedy the narrowness and weak generalization of the single-task model.
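A hard-parameter-sharing skeleton in PyTorch (layer sizes and the auxiliary task are illustrative; the paper's SLNet/MTLNet/ATLNet details are not reproduced): a shared LSTM encodes acoustic features, and separate linear heads serve the main dialect recognition task and an auxiliary task.

```python
import torch
import torch.nn as nn

class HardSharedNet(nn.Module):
    """Shared LSTM trunk + per-task heads (hard parameter sharing)."""
    def __init__(self, feat_dim=39, hidden=128, n_dialects=10, n_aux=4):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        self.dialect_head = nn.Linear(hidden, n_dialects)  # main task
        self.aux_head = nn.Linear(hidden, n_aux)           # auxiliary task (e.g., region)

    def forward(self, x):                  # x: (batch, frames, feat_dim)
        h, _ = self.encoder(x)
        pooled = h.mean(dim=1)             # average over time
        return self.dialect_head(pooled), self.aux_head(pooled)

model = HardSharedNet()
x = torch.randn(8, 200, 39)               # e.g., 200 frames of MFCC-like features
main_logits, aux_logits = model(x)
loss = nn.functional.cross_entropy(main_logits, torch.randint(10, (8,))) \
     + 0.3 * nn.functional.cross_entropy(aux_logits, torch.randint(4, (8,)))
loss.backward()
```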
Long Term Recurrent Neural Network with State-Frequency Memory
Zhuang Liansheng, Lü Yang, Yang Jian, Li Houqiang
2019, 56(12):  2641-2648.  doi:10.7544/issn1000-1239.2019.20180474
Modeling time series has become one of the research hotspots in machine learning because of its important application value. The recurrent neural network (RNN) has been a crucial tool for modeling time series in recent years. However, existing RNNs commonly find it hard to learn long-term dependencies in the temporal domain and are unable to model the frequency patterns in time series; these two problems severely limit their performance on time series containing long-term dependencies and rich frequency components. To solve them, we propose the long-term recurrent neural network with state-frequency memory (LTRNN-SFM), which models features in both the frequency and temporal domains by replacing the hidden-layer state vector of conventional RNNs with a state-frequency matrix. Meanwhile, the proposed network effectively avoids vanishing and exploding gradients by separating neurons in the same layer, using activation functions such as the rectified linear unit (ReLU), and clipping weights. In this way, an LTRNN-SFM with long-term memory and multiple layers can be trained easily. Experimental results demonstrate that the proposed network achieves the best performance in processing time series with long-term dependencies and rich frequency components.
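A heavily reduced numpy sketch of the state-frequency idea (an illustration of the mechanism only; gates, parameters and the readout below are toy assumptions, not the paper's equations): the hidden state is a D x K matrix whose K columns accumulate the gated input signal modulated by K frequency components.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sfm_step(S_re, S_im, x, h_prev, W, U, b, omegas, t):
    """One simplified state-frequency update: the D x K state matrix tracks
    K frequency components (real/imaginary parts) of the gated input signal."""
    pre = W @ x + U @ h_prev + b              # toy: one shared pre-activation for all gates
    f, i, g = sigmoid(pre), sigmoid(pre), np.tanh(pre)
    S_re = f[:, None] * S_re + np.outer(i * g, np.cos(omegas * t))
    S_im = f[:, None] * S_im + np.outer(i * g, np.sin(omegas * t))
    amp = np.sqrt(S_re**2 + S_im**2)          # amplitude per state-frequency cell
    h = np.tanh(amp.mean(axis=1))             # collapse the frequency axis
    return S_re, S_im, h

D, K, X_DIM = 16, 8, 4
rng = np.random.default_rng(0)
W, U, b = rng.normal(size=(D, X_DIM)), rng.normal(size=(D, D)), np.zeros(D)
omegas = np.linspace(0.1, np.pi, K)
S_re, S_im, h = np.zeros((D, K)), np.zeros((D, K)), np.zeros(D)
for t, x in enumerate(rng.normal(size=(100, X_DIM)), start=1):
    S_re, S_im, h = sfm_step(S_re, S_im, x, h, W, U, b, omegas, t)
print(h.shape)
```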
The Autonomous Safe Landing Area Determination Method and Obstacle Avoidance Strategy
Jin Tao, Zhang Dengyi, Cai Bo
2019, 56(12):  2649-2659.  doi:10.7544/issn1000-1239.2019.20190218
The autonomous detection of lunar surface obstacles and the selection of safe landing sites are key to the safe landing of a probe on the lunar surface. This paper simulates the autonomous safe landing of the powered descent phase of lunar exploration and proposes a simulation-based algorithm for the autonomous safe landing of spacecraft. Firstly, the lunar region captured by the lander's CCD camera is determined, and lunar remote sensing images are simulated based on a ray tracing algorithm and elevation data. Then, based on edge detection, obstacles such as rocks and craters on the target surface are extracted and fitted with ellipses to determine the obstacle areas. On this basis, using morphological image operations, remote sensing images at each level of a Gaussian image pyramid are acquired, and multi-frame images are processed in real time to determine the safe landing area accurately. In the obstacle avoidance stage, after the landing area is basically determined, a spiral search strategy based on three-dimensional reconstruction is proposed: building on traditional image matching, the algorithm first reconstructs the lunar landing area with high precision using the laser three-dimensional imager carried by the lander, and then searches the reconstructed area spirally to select a more precise and safer site for the soft landing. Finally, the effectiveness of the proposed algorithm is verified by simulation experiments.
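The edge-detection-plus-ellipse-fitting stage can be sketched with OpenCV (thresholds, pyramid depth and the input file name are assumptions; this is not the paper's implementation):

```python
import cv2

img = cv2.imread("lunar_region.png", cv2.IMREAD_GRAYSCALE)  # assumed input image

# Multi-scale processing via a Gaussian image pyramid, as in the paper's pipeline.
pyramid = [img]
for _ in range(2):
    pyramid.append(cv2.pyrDown(pyramid[-1]))

obstacles = []
for level, im in enumerate(pyramid):
    edges = cv2.Canny(im, 50, 150)                 # rock/crater edges (thresholds assumed)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if len(c) >= 5:                            # cv2.fitEllipse needs at least 5 points
            (cx, cy), (w, h), ang = cv2.fitEllipse(c)
            s = 2 ** level                         # map back to full-resolution coordinates
            obstacles.append((cx * s, cy * s, w * s, h * s))

print(f"{len(obstacles)} obstacle candidates (ellipse-fitted)")
```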
Time-Varying Underwater Acoustic Channel Based Physical Layer Secret Key Generation Scheme
Xu Ming, Fan Yimeng, Jiang Changjun
2019, 56(12):  2660-2670.  doi:10.7544/issn1000-1239.2019.20190040
With the continuous development of wireless networks, physical layer security has gradually become a focus of widespread concern. Addressing how to extract a highly confidential key from the source information when legitimate nodes face more uncertainty than the eavesdropping node under the multipath and Doppler effects of the underwater acoustic channel, a physical layer secret key generation scheme based on the time-varying underwater acoustic channel is proposed. For the first time, the α-order Rényi entropy under multipath and Doppler effects is accurately characterized, and the uncertainty of the source sequence at the legitimate nodes and the eavesdropping node is obtained. On this basis, a key agreement protocol with strong security is proposed, which uses a Hash function to construct a univariate high-order polynomial that authenticates both communicating parties and realizes secure transmission of the index sequence and the preselected key over the public channel. Moreover, a privacy amplification protocol against active attacks is designed using bilinear mapping; it does not depend on the length or randomness of the random seed. The robustness, confidentiality and correctness of the scheme are proved information-theoretically. Simulation results show that, when the amount of source information is 50 000 b, the upper bound of the key leakage rate is 3.74×10⁻⁶ and the upper bound of the active attack success rate is 5.468×10⁻²⁰, which verifies the feasibility of the proposed scheme.
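The generic channel-quantization step of physical-layer key generation can be sketched as follows (this illustrates the general principle only; the paper's Rényi-entropy analysis, polynomial authentication and bilinear-map privacy amplification are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(7)

# Reciprocal channel probes measured by Alice and Bob (correlated, noisy).
true_gain = rng.normal(size=1000)
alice = true_gain + 0.1 * rng.normal(size=1000)
bob   = true_gain + 0.1 * rng.normal(size=1000)

def quantize(samples, guard=0.5):
    """Drop samples in a guard band around the median, map the rest to bits.
    Returns (bits, indices): the index sequence is what is exchanged publicly."""
    med = np.median(samples)
    keep = np.abs(samples - med) > guard
    return (samples[keep] > med).astype(int), np.where(keep)[0]

bits_a, idx_a = quantize(alice)
bits_b, idx_b = quantize(bob)
common = np.intersect1d(idx_a, idx_b)        # agree on indices over the public channel
ka = bits_a[np.searchsorted(idx_a, common)]
kb = bits_b[np.searchsorted(idx_b, common)]
print("bit agreement:", (ka == kb).mean())   # close to 1 for the legitimate parties
```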
Consensus Mechanism Based on Threshold Cryptography Scheme
Wang Zuan, Tian Youliang, Yue Chaoyue, Zhang Duo
2019, 56(12):  2671-2683.  doi:10.7544/issn1000-1239.2019.20190053
Aiming at the huge resource consumption, the system performance bottleneck and the "tragedy of the commons" in Bitcoin's PoW (proof of work) consensus mechanism, we analyze, from the perspective of game theory, the "tragedy of the commons" caused by rewarding only transaction fees in the later stage of the Bitcoin system, and propose a consensus mechanism based on threshold cryptography (TCCM). Firstly, the new consensus protocol introduces the idea of a margin and proposes a margin model based on threshold group signatures. The model not only ensures the security of the margin but also provides a guarantee that nodes produce blocks honestly. Secondly, a bidding model for the right of accounting is constructed using threshold encryption to select the node that produces the block; this model guarantees the fairness of the bidding environment and selects the accounting node randomly. Then, a new incentive mechanism is redesigned on the basis of the original block rewards so that more nodes can participate in the consensus process. Finally, security and performance analysis shows that TCCM not only effectively reduces the huge resource consumption but also improves transaction processing efficiency and makes the whole blockchain system more secure.
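The threshold primitive underlying the margin and bidding models can be illustrated with textbook Shamir secret sharing over a prime field (a generic (t, n) threshold scheme, not the paper's specific group-signature construction):

```python
import random

P = 2**127 - 1  # a Mersenne prime defining the field

def share(secret, t, n):
    """Split secret into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)        # e.g., a 3-of-5 margin release
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[1:4]) == 123456789
```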
Process Abnormal Detection Based on System Call Vector Space in Cloud Computing Environments
Chen Xingshu, Chen Jiaxin, Jin Xin, Ge Long
2019, 56(12):  2684-2693.  doi:10.7544/issn1000-1239.2019.20180843
Intrusion detection schemes based on system calls in the traditional host domain often monitor the running behavior of a single privileged process. Because of the greater security risks in cloud computing environments, it is difficult for host intrusion detection schemes to effectively detect abnormal process behavior in virtual machines. To break this limitation, a virtual machine process behavior detection model based on a system call vector space is proposed. The model collects system call data from different guest operating systems without any agent inside the virtual machine. The TF-IDF (term frequency-inverse document frequency) algorithm is introduced to weight the process system call data, so as to distinguish the services running in a virtual machine and identify abnormal process behavior. Furthermore, to optimize the efficiency of the detection algorithm, a storage strategy combining compressed sparse row (CSR) matrices and K-dimensional trees is designed. A prototype system called VMPBD (virtual machine process behavior detecting) is implemented on the KVM (kernel-based virtual machine) platform, and its functionality and performance are tested on Linux and Windows virtual machines. The results show that VMPBD can effectively detect abnormal virtual machine process behavior, with a false alarm rate and system performance overhead within acceptable ranges.
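The TF-IDF weighting of system-call traces can be sketched with scikit-learn (the trace strings and the similarity threshold are illustrative; the paper's agentless collection and K-d-tree indexing are not shown):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each "document" is one process's system-call trace.
normal_traces = [
    "open read read write close",
    "open read write close",
    "socket connect send recv close",
]
suspect = ["open ptrace ptrace mmap write"]

vec = TfidfVectorizer(token_pattern=r"\S+")   # one token per system call
X = vec.fit_transform(normal_traces)          # CSR sparse matrix, as in the paper
q = vec.transform(suspect)

# Flag a process if it is too dissimilar from every known-normal service profile.
sim = cosine_similarity(q, X).max()
print("anomalous" if sim < 0.5 else "normal", f"(max similarity {sim:.2f})")
```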
Post Quantum Authenticated Key Exchange Protocol Based on Ring Learning with Errors Problem
Li Zichen, Xie Ting, Zhang Juanmei, Xu Ronghua
2019, 56(12):  2694-2701.  doi:10.7544/issn1000-1239.2019.20180874
The rapid development of quantum computing poses a serious threat to the security of traditional public-key cryptosystems, so it is imperative to design and deploy post-quantum cryptosystems that can withstand quantum attacks. A post-quantum authenticated key exchange (AKE) protocol based on the ring learning with errors (RLWE) problem is proposed using an encryption-based construction. First, an IND-CPA secure public-key encryption scheme employing ciphertext compression is introduced; applying a variant of the Fujisaki-Okamoto transform to it yields an IND-CCA secure key encapsulation mechanism. An authenticated key exchange protocol is then obtained through implicit authentication; it is provably secure under the standard eCK model and achieves weak perfect forward security. The protocol samples errors from a centered binomial distribution, which offers higher sampling efficiency, and sets reasonable parameters to ensure that both communicating parties obtain the same session key. The security of the protocol is measured at 313 b by an LWE tester. The protocol avoids the error-reconciliation mechanism originally proposed by Ding, and compared with existing AKE protocols based on hard lattice problems, its communication cost is significantly reduced. With smaller public key, private key and ciphertext sizes and stronger provable security guarantees, it is a more concise and efficient post-quantum AKE protocol.
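The centered binomial error sampler mentioned in the abstract is simple to state (a standard construction, used e.g. in NewHope/Kyber-style schemes; the parameter k below is illustrative): sample 2k uniform bits and output the difference of two k-bit sums, giving a distribution centered at 0 with variance k/2.

```python
import numpy as np

rng = np.random.default_rng(42)

def centered_binomial(k, size):
    """Sample from the centered binomial distribution psi_k:
    sum of k random bits minus sum of k random bits, values in [-k, k]."""
    a = rng.integers(0, 2, size=(size, k)).sum(axis=1)
    b = rng.integers(0, 2, size=(size, k)).sum(axis=1)
    return a - b

e = centered_binomial(k=8, size=100000)
print(e.mean(), e.var())   # approximately 0 and k/2 = 4
```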
The Role of Architecture Simulators in the Process of CPU Design
Zhang Qianlong, Hou Rui, Yang Sibo, Zhao Boyan, Zhang Lixin
2019, 56(12):  2702-2719.  doi:10.7544/issn1000-1239.2019.20190044
As Moore's law comes to an end, the improvement of CPU performance increasingly depends on the optimization and improvement of the CPU microarchitecture, which relies heavily on the assistance of architecture simulators. The CPU architecture simulator therefore plays an increasingly important role in the design of high-performance CPUs: it helps explore the CPU microarchitecture, verify the logic design before tape-out, build the post-silicon test environment, and start developing firmware, operating systems and hypervisors before the CPU is ready. In this paper, we summarize the experience of academia and industrial CPU vendors in developing and using architecture simulators, clarifying the important role of architecture simulators in the CPU design process and how to develop and optimize them. First, we introduce the relationship between open-source architecture simulators and CPU design; then we summarize and analyze the methodologies and experience of well-known industrial CPU vendors in developing and using architecture simulators during CPU design. Second, we summarize methodologies for calibrating and optimizing architecture simulators and put forward suggestions on their design and usage. Third, we summarize the scale-up and scale-out optimization methods of architecture simulators and introduce some new simulators. Finally, we conclude the paper and point out open problems in developing new architecture simulators.
Optimum Research on Inner-Inst Memory Access Conflict for Dataflow Architecture
Ou Yan, Feng Yujing, Li Wenming, Ye Xiaochun, Wang Da, Fan Dongrui
2019, 56(12):  2720-2732.  doi:10.7544/issn1000-1239.2019.20190115
The rapid development of artificial intelligence applications, such as neural networks, image recognition and text recognition, brings huge challenges to traditional processors. Coarse-grained dataflow architectures have become a hotspot for AI applications because they possess high instruction-level parallelism while remaining broadly applicable and adaptable. However, since the processing elements of coarse-grained dataflow processors use random access memory as local storage, and neural networks are memory-intensive, many inner-inst (intra-instruction) memory access conflicts arise. After analyzing the memory access behavior of AI applications, we find a large number of inner-inst memory access conflicts, which greatly degrade the utilization of computing units. Based on this observation, we propose a flexible data redundancy strategy (FRS) for inner-inst memory access conflicts in dataflow processors, which, at compile time, allocates multiple storage locations for the operand access requests that induce inner-inst conflicts. With FRS, the number of conflicts in the RAM is effectively reduced. We use typical AI benchmarks, such as LeNet and AlexNet, in the experiments. The experimental results show that FRS improves power efficiency by 30.21% and 12.37% compared with the Round-Robin and Re-Hash non-data-redundancy strategies, respectively, and by 27.95% compared with a 2-copy data redundancy strategy.
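A compile-time sketch of the redundancy decision (the bank-mapping function and conflict model are simplified assumptions): operand addresses that collide in the same RAM bank within one instruction are given extra copies in the least-loaded banks, so the conflicting accesses can be served in parallel.

```python
from collections import defaultdict

N_BANKS = 4
bank = lambda addr: addr % N_BANKS       # simplified bank-mapping function

def plan_redundancy(operands):
    """operands: addresses accessed by one instruction. Returns
    {addr: [banks holding a copy]}, adding copies for conflicting operands."""
    by_bank = defaultdict(list)
    for a in operands:
        by_bank[bank(a)].append(a)
    placement = {a: [bank(a)] for a in operands}
    load = {b: len(by_bank.get(b, [])) for b in range(N_BANKS)}
    for b, addrs in list(by_bank.items()):
        for a in addrs[1:]:              # each operand beyond the first conflicts in bank b
            spare = min(range(N_BANKS), key=lambda x: load[x])
            placement[a].append(spare)   # replicate the operand into the least-loaded bank
            load[spare] += 1
    return placement

print(plan_redundancy([0, 4, 8, 1]))     # 0, 4, 8 collide in bank 0 -> copies in banks 2, 3
```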
Extending PCM Lifetime by Redirecting Writes from the Most Modified Byte
Gao Peng, Wang Dongsheng, Wang Haixia
2019, 56(12):  2733-2743.  doi:10.7544/issn1000-1239.2019.20180267
Recent memory systems are typically built from multiple memory chips, concatenating the data lines of each chip and sharing the address lines. Consequently, the service time of such a memory system, especially one built from a write-sensitive memory device such as PCM, can be diminished by the bucket effect: some chips wear out faster than others because of write imbalance among the chips. The present work first proves the existence of the bucket effect by numerical experiment and data analysis. Then, a hybrid memory design method termed RMB (redirecting the most-modified byte) is proposed to prolong the endurance of PCM-based memory systems. Alongside the PCM chips, the system adds a long-lived auxiliary chip to which writes can be redirected from any PCM chip with more modifications than the others. The method offers two advantages simultaneously: the wear of the most-modified chip, and of all PCM chips, is reduced, and the write differences among the PCM chips are balanced. The evaluations show that RMB enhances the endurance of the memory system by up to 7.9x compared with a memory without wear mitigation, and by up to 5.14x compared with the state-of-the-art technique PRES.
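The redirect decision can be sketched as follows (counter granularity and data layout are illustrative assumptions): per-chip write counters track how often each byte lane is modified, and on each write-back the byte belonging to the most-worn modified chip is stored in the auxiliary chip instead of PCM.

```python
N_CHIPS = 8                          # one byte lane per PCM chip in a memory line
wear = [0] * N_CHIPS                 # per-chip write counters
pcm = {}                             # line address -> bytes stored in the PCM chips
aux = {}                             # auxiliary-chip overrides: (addr, lane) -> byte

def write_line(addr, new):
    old = pcm.get(addr, [0] * N_CHIPS)
    modified = [i for i in range(N_CHIPS) if old[i] != new[i]]
    if modified:
        victim = max(modified, key=lambda i: wear[i])   # spare the most-worn chip
        aux[(addr, victim)] = new[victim]               # redirected write: no PCM wear
        for i in modified:
            if i != victim:
                old[i] = new[i]
                wear[i] += 1                            # in-place PCM write wears chip i
                aux.pop((addr, i), None)                # drop a stale redirect, if any
    pcm[addr] = old

def read_line(addr):
    line = pcm.get(addr, [0] * N_CHIPS)
    return [aux.get((addr, i), line[i]) for i in range(N_CHIPS)]

write_line(0, [1, 2, 3, 4, 5, 6, 7, 8])
write_line(0, [9, 2, 3, 4, 5, 6, 7, 8])   # only lane 0 changes; it is redirected again
print(read_line(0), wear)                 # [9, 2, ..., 8]; chip 0 accumulates no wear
```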