ISSN 1000-1239 CN 11-1777/TP

    Journal of Computer Research and Development    2021, 58 (8): 1573-1574.   DOI: 10.7544/issn1000-1239.2021.qy0801
    Passive-Aggressive Learning with Feature Evolvable Streams
    Liu Yanfang, Li Wenbin, Gao Yang
    Journal of Computer Research and Development    2021, 58 (8): 1575-1585.   DOI: 10.7544/issn1000-1239.2021.20210330
    In many real-world applications, data are collected in the form of a feature evolvable stream. For instance, old features of data gathered by limited-lifespan sensors disappear while new features emerge as the sensors are replaced. Online passive-aggressive algorithms have proven effective in learning linear classifiers from datasets with both a fixed feature space and a trapezoidal feature space. Therefore, in this paper we propose a new feature-evolvable learning algorithm based on the passive-aggressive update strategy (PAFE), which utilizes the margin to modify the current classifier. The proposed algorithm learns two models through the passive-aggressive update strategy: one from the current features and one from features recovered from the vanished features. Specifically, we both recover the vanished features and mine the initialization of the current model from the overlapping periods in which old and new features are simultaneously available. Furthermore, we use two ensemble methods to improve performance: combining the predictions from the two models, and dynamically selecting the best single prediction. Experiments on both synthetic and real data validate the effectiveness of the proposed algorithm.
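    PAFE's update strategy builds on the classic online passive-aggressive rule. A minimal sketch of that base rule (the PA-I variant for binary classification) is shown below; this is illustrative background, not the paper's full two-model, feature-recovery algorithm.

    ```python
    # Minimal PA-I sketch: stay passive when the margin is satisfied, otherwise
    # take the smallest corrective (aggressive) step that fixes the violation.

    def pa_update(w, x, y, C=1.0):
        """One PA-I step for a linear classifier; y in {-1, +1}."""
        margin = y * sum(wi * xi for wi, xi in zip(w, x))
        loss = max(0.0, 1.0 - margin)           # hinge loss
        if loss == 0.0:
            return w                            # passive: no update needed
        norm_sq = sum(xi * xi for xi in x)
        tau = min(C, loss / norm_sq)            # aggressiveness, capped by C
        return [wi + tau * y * xi for wi, xi in zip(w, x)]

    # Tiny separable stream: the weights converge after one pass
    w = [0.0, 0.0]
    for x, y in [([1.0, 0.0], 1), ([0.0, 1.0], -1)] * 3:
        w = pa_update(w, x, y)
    ```

    After the first two examples the margin constraint holds, so subsequent rounds leave the weights untouched.
    
    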
    Deep Intelligent Ant Colony Optimization for Solving Travelling Salesman Problem
    Wang Yuan, Chen Ming, Xing Lining, Wu Yahui, Ma Wubin, Zhao Hong
    Journal of Computer Research and Development    2021, 58 (8): 1586-1598.   DOI: 10.7544/issn1000-1239.2021.20210320
    Heuristic algorithms are important methods for solving combinatorial optimization problems, since they can find feasible solutions at reasonable computational cost, and heuristic design is an important research field in the combinatorial optimization community. However, designing heuristic algorithms requires extensive domain knowledge and years of trial-and-error, and the performance of manually designed heuristics usually carries no guarantee across different problem scenarios. Deep learning approaches, on the other hand, can design heuristics automatically, but they lack the ability to search the solution space. To overcome these disadvantages, we propose a hybrid meta-heuristic framework that combines deep reinforcement learning with ant colony optimization. In this algorithm, ant colony optimization benefits from heuristic information learned by the deep reinforcement learning method, and the solution-searching ability of the deep reinforcement learning method is in turn improved because the ant colony optimization is applied. To evaluate the algorithm, instances of different problem scales selected from TSPLIB are tested, with ant-based heuristics and reinforcement learning methods as comparison algorithms. Experimental results show that the deep reinforcement learning method significantly improves both the solution quality and the convergence performance of the ant colony optimization algorithm.
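    The coupling point between the two components is the ant's transition rule: pheromone trails come from the colony, while the heuristic values would be supplied by the learned model instead of the usual 1/distance. A generic sketch of that transition step, with the learned heuristic as a plug-in dictionary (not the paper's exact formulation):

    ```python
    import random

    # Standard ACO transition: choose the next city with probability
    # proportional to pheromone^alpha * heuristic^beta. In the hybrid
    # framework, eta would hold values produced by the DRL model.

    def choose_next_city(current, unvisited, tau, eta, alpha=1.0, beta=2.0,
                         rng=random):
        weights = [(tau[(current, j)] ** alpha) * (eta[(current, j)] ** beta)
                   for j in unvisited]
        total = sum(weights)
        r = rng.random() * total        # roulette-wheel selection
        acc = 0.0
        for j, w in zip(unvisited, weights):
            acc += w
            if r <= acc:
                return j
        return unvisited[-1]
    ```

    With a learned heuristic of zero for a candidate edge, that city is never chosen, which is how the learned prior steers the colony's search.
    
    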
    Knowledge Hypergraph Link Prediction Model Based on Tensor Decomposition
    Wang Peiyan, Duan Lei, Guo Zhengshan, Jiang Weipeng, Zhang Yidan
    Journal of Computer Research and Development    2021, 58 (8): 1599-1611.   DOI: 10.7544/issn1000-1239.2021.20210315
    Knowledge hypergraphs contain facts about the real world and provide a structured representation of these facts, but they cannot include all facts and are therefore highly incomplete. Link prediction approaches aim at inferring missing links from the existing links between entities, so they are widely used in knowledge base completion. At present, most research focuses on the completion of binary relational knowledge graphs. However, relations between entities in the real world usually go beyond pairwise associations, that is, more than two entities may be involved in one relation. Compared with knowledge graphs, knowledge hypergraphs can represent these complex n-ary relations in a flexible and natural way. Therefore, we propose a knowledge hypergraph link prediction model based on tensor decomposition, called Typer. It explicitly models the roles of entities in different relations and positions, and decomposes the relations to improve performance. Meanwhile, considering that promoting information flow between entities and relations helps in learning their embeddings, we propose the concept of windows to increase the interaction between entities and relations. In addition, we prove that Typer is fully expressive and derive a bound on the dimensionality of its embeddings for full expressivity. We conduct extensive experiments on multiple public real-world knowledge hypergraph datasets. Experiments show that Typer is effective for link prediction in knowledge hypergraphs and achieves better results than other approaches on all benchmark datasets.
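    To make the tensor-decomposition idea concrete, here is the simplest CP/DistMult-style scorer generalized to n-ary facts: a fact (r, e1, ..., en) scores as the sum over embedding components of the relation weight times the product of the entity values. This is a generic baseline for illustration only, not Typer's role- and position-aware decomposition.

    ```python
    # Generic CP-style score for an n-ary fact:
    #   score(r, e1..en) = sum_k  r[k] * e1[k] * ... * en[k]

    def nary_score(rel, entities):
        dim = len(rel)
        score = 0.0
        for k in range(dim):
            prod = rel[k]
            for e in entities:
                prod *= e[k]
            score += prod
        return score
    ```

    Models like Typer refine this template so that an entity's contribution depends on its position in the tuple, which this plain product cannot distinguish.
    
    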
    Position-Aware Network Representation Learning via K-Step Mutual Information Estimation
    Chu Xiaokai, Fan Xinxin, Bi Jingping
    Journal of Computer Research and Development    2021, 58 (8): 1612-1623.   DOI: 10.7544/issn1000-1239.2021.20210321
    As network data grows rapidly and persistently, and is coupled with ever more sophisticated applications, network representation learning, which aims to learn high-quality embedding vectors, has become a popular methodology for performing various network analysis tasks. However, existing representation learning methods have little power to capture a node's positional information. To handle this problem, this paper proposes a novel position-aware network representation learning model, PMI for short, which uses center-ring mutual information estimation to inject a node's global position into its embedding. The proposed PMI encourages each node to perceive its K-step neighbors by maximizing the mutual information between the node and its step-specific neighbors. Extensive experiments using four real-world datasets on several representative tasks demonstrate that PMI learns high-quality embeddings and achieves the best performance compared with other state-of-the-art models. Furthermore, a novel neighbor-alignment experiment verifies that the learned embeddings can indeed identify their K-step neighbors and capture positional information, generating appropriate embeddings for various downstream tasks.
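    The step-specific neighbor sets at the heart of this idea are simply the "rings" of nodes at exactly k hops from a center node. A plain BFS sketch of extracting them (the MI-maximization objective itself is not reproduced here):

    ```python
    # Extract ring k = the set of nodes at exactly k hops from `center`,
    # for k = 0..max_k, via breadth-first expansion.

    def k_step_rings(adj, center, max_k):
        rings = {0: {center}}
        visited = {center}
        frontier = {center}
        for k in range(1, max_k + 1):
            nxt = set()
            for u in frontier:
                for v in adj.get(u, ()):
                    if v not in visited:
                        visited.add(v)
                        nxt.add(v)
            rings[k] = nxt
            frontier = nxt
        return rings
    ```

    A position-aware objective then ties the center node's embedding to each ring separately, so that "how far" a node is from others is reflected in the learned vectors.
    
    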
    Dynamic Heterogeneous Network Embedding Based on Non-Decreasing Temporal Random Walk
    Guo Jiawen, Bai Qijie, Lin Zhutian, Song Chunyao, Yuan Xiaojie
    Journal of Computer Research and Development    2021, 58 (8): 1624-1641.   DOI: 10.7544/issn1000-1239.2021.20210317
    Network embedding is an important representation learning method that maps high-dimensional networks to low-dimensional vector spaces. Research has been carried out on dynamic homogeneous network embedding and on static network embedding, but there are still few studies on embedding dynamic heterogeneous information networks (DHINs). Directly applying static or dynamic homogeneous network embedding methods to the DHIN embedding problem leads to serious information loss, because the dynamic or heterogeneous properties of the network are ignored. Therefore, we propose a DHIN embedding method called TNDE, based on time- and category-constrained random walks. The method adopts category constraints to preserve semantic information in DHINs. Moreover, unlike the temporal random walks in other dynamic network embedding methods, TNDE uses a non-decreasing temporal constraint to perform random walks incrementally. This handles the situation where edges in strongly semantic local structures share the same timestamp, which arises from the coexistence of dynamic and heterogeneous properties in a DHIN, and prevents the walk from being trapped among identical timestamps. TNDE provides an efficient online representation learning algorithm by adopting incremental walking and incremental representation learning for real-time changes. Experimental results on three real datasets show that TNDE generalizes well to networks with different characteristics and significantly improves embedding quality, outperforming state-of-the-art methods by 2.4%~92.7% on downstream link prediction and node classification tasks while reducing the algorithm runtime by 12.5%~99.91%.
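    The non-decreasing constraint means each step may only traverse an edge whose timestamp is greater than or equal to the previous edge's timestamp. A minimal sketch of such a walk (TNDE additionally applies node/edge category constraints and incremental updates, omitted here):

    ```python
    import random

    # Non-decreasing temporal random walk: every chosen edge's timestamp
    # must be >= the timestamp of the edge taken on the previous step.

    def temporal_walk(edges, start, length, rng=random):
        """edges: {node: [(neighbor, timestamp), ...]}"""
        walk = [start]
        t = float("-inf")           # no constraint before the first step
        node = start
        for _ in range(length):
            candidates = [(v, ts) for v, ts in edges.get(node, []) if ts >= t]
            if not candidates:
                break               # dead end: no admissible outgoing edge
            node, t = rng.choice(candidates)
            walk.append(node)
        return walk
    ```

    Allowing equality (`>=` rather than `>`) is what keeps the walk from stalling on local structures whose edges all carry the same timestamp.
    
    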
    Gene Sequence Representation Learning Based on Virus Transmission Network
    Ma Yang, Liu Zeyi, Liang Xingxing, Cheng Guangquan, Yang Fangjie, Cheng Qing, Liu Zhong
    Journal of Computer Research and Development    2021, 58 (8): 1642-1654.   DOI: 10.7544/issn1000-1239.2021.20210287
    Non-coding regions and missing segments are common in collected gene sequence data. Existing gene sequence representation methods mostly extract features from high-dimensional gene sequences through manual processes, which are usually computationally expensive; moreover, prediction precision relies heavily on how biological background knowledge is utilized. In this work, we construct a gene sequence representation method based on graph context information in a virus transmission network. After encoding the target node's virus sequence, we use an attention mechanism to aggregate the gene sequence information of neighboring nodes, thereby obtaining a new representation of the target node's gene sequence. The representation model is optimized based on the fact that the gene sequences of neighboring nodes are more similar than those of non-neighboring nodes. Once well trained, the new representation not only extracts sequence features accurately, but also greatly reduces the dimensionality of the gene sequence and improves computational efficiency. We train the gene sequence representation model on a simulated transmission network and on SARS-CoV-2 and HIV transmission networks, and then predict the un-sampled infections in each transmission network. The experimental results show the effectiveness of our model, whose performance is better than that of other models. Its success in predicting un-sampled infections in virus transmission networks also has practical significance for epidemiological investigation.
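    The neighbor-aggregation step can be sketched as plain dot-product attention over the neighbors' encoded sequences: score each neighbor against the target, softmax the scores, and take the weighted sum. This is illustrative only; the paper's exact encoder and attention parameterization are not reproduced.

    ```python
    import math

    def softmax(xs):
        m = max(xs)                      # subtract max for numerical stability
        exps = [math.exp(x - m) for x in xs]
        s = sum(exps)
        return [e / s for e in exps]

    def aggregate(target, neighbors):
        """Attention-weighted sum of neighbor vectors (all equal length)."""
        scores = softmax([sum(t * n for t, n in zip(target, nb))
                          for nb in neighbors])
        dim = len(target)
        return [sum(a * nb[d] for a, nb in zip(scores, neighbors))
                for d in range(dim)]
    ```

    Neighbors whose encodings align with the target receive larger weights, which matches the training signal that neighboring nodes' sequences are more similar than non-neighbors'.
    
    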
    Multimodal Adversarial Learning Based Unsupervised Time Series Anomaly Detection
    Huang Xunhua, Zhang Fengbin, Fan Haoyi, Xi Liang
    Journal of Computer Research and Development    2021, 58 (8): 1655-1667.   DOI: 10.7544/issn1000-1239.2021.20201037
    Time series anomaly detection is one of the most important research directions in machine learning; it aims to find patterns that deviate significantly from the normal behavior of a time series. However, most existing methods are based on single-modality feature learning, which ignores the relevance and complementarity of the time series' feature distributions across modalities and consequently fails to make full use of the available information. To alleviate these problems, we present a time series anomaly detection model based on multimodal adversarial learning. First, we convert the original time series into the frequency domain to construct a multi-modality representation. Then, based on this representation, we propose a multimodal generative adversarial network model to jointly learn the distributions of normal data in the time and frequency domains. Finally, by modeling anomaly detection as a data reconstruction problem in both domains, we measure the anomaly score of a time series from both the time-domain and frequency-domain perspectives. We verify the proposed method on the UCR and MIT-BIH time series datasets. Experimental results on 6 datasets from UCR and MIT-BIH show that, compared with the state of the art, the proposed method improves the AUC and AP metrics of anomaly detection by 12.50% and 21.59% respectively.
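    The two-domain scoring idea can be sketched as a weighted sum of reconstruction errors in the time domain and in the frequency domain. A naive DFT is used below to keep the sketch self-contained; in the model itself the reconstructions come from adversarially trained generators, and the weighting is a design choice, not the paper's stated values.

    ```python
    import cmath

    def dft_mag(x):
        """Magnitude spectrum via a naive O(n^2) DFT."""
        n = len(x)
        return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                        for t in range(n)))
                for k in range(n)]

    def anomaly_score(x, x_rec, w_time=0.5, w_freq=0.5):
        """Combine squared reconstruction error in both domains."""
        err_t = sum((a - b) ** 2 for a, b in zip(x, x_rec))
        fx, fr = dft_mag(x), dft_mag(x_rec)
        err_f = sum((a - b) ** 2 for a, b in zip(fx, fr))
        return w_time * err_t + w_freq * err_f
    ```

    A window the model reconstructs well scores near zero in both domains; anomalies that are subtle in one domain can still raise the error in the other.
    
    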
    Software Vulnerability Detection Method Based on Code Property Graph and Bi-GRU
    Xiao Tianming, Guan Jianbo, Jian Songlei, Ren Yi, Zhang Jianfeng, Li Bao
    Journal of Computer Research and Development    2021, 58 (8): 1668-1685.   DOI: 10.7544/issn1000-1239.2021.20210297
    For today's large-scale and complex software, the forms of vulnerable code tend to be increasingly diversified. Traditional vulnerability detection methods cannot meet the requirements of such diverse vulnerabilities because of their high degree of human participation and weak ability to detect unknown vulnerabilities. To improve the detection of unknown vulnerabilities, many machine learning methods have been applied to software vulnerability detection; however, because much syntactic and semantic information is lost in code representation, both the false positive rate and the false negative rate remain high. To address this issue, a software vulnerability detection method based on the code property graph and Bi-GRU is proposed. The method extracts the abstract syntax tree sequence and the control flow graph sequence from a function's code property graph as the function's representation, which reduces the information lost in representing the code. The method also uses Bi-GRU to build the feature extraction model, improving the extraction of vulnerable-code features. Experimental results show that, compared with methods based on the abstract syntax tree representation alone, this method improves accuracy and recall by 35% and 22% respectively. It improves vulnerability detection on a real dataset mixing source code from multiple software projects, and effectively reduces the false positive and false negative rates.
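    One of the two sequence views, the abstract-syntax-tree node-type sequence, is easy to illustrate. The sketch below uses Python's standard `ast` module purely for demonstration; the paper derives its sequences from a code property graph of the target source language, not from Python's parser.

    ```python
    import ast

    # Turn source code into the sequence of AST node-type names, the kind of
    # flat syntactic view that can be fed to a sequence model such as Bi-GRU.

    def ast_type_sequence(source):
        tree = ast.parse(source)
        return [type(node).__name__ for node in ast.walk(tree)]
    ```

    Pairing such a syntax-derived sequence with a control-flow-derived one is what lets the model see both what the code says and how it executes.
    
    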
    Butterfly Species Identification from Natural Environment Based on Improved RetinaNet
    Xie Juanying, Lu Yinyuan, Kong Weixuan, Xu Shengquan
    Journal of Computer Research and Development    2021, 58 (8): 1686-1704.   DOI: 10.7544/issn1000-1239.2021.20210283
    Butterflies are insects that are sensitive to their habitat: the distribution of butterfly species in the natural environment reflects the balance of the regional ecosystem and the region's biodiversity. Identifying butterfly species manually is heavily time-consuming for experts, and computer vision technology makes it possible to identify butterfly species automatically. This paper focuses on identifying butterfly species from images taken in the natural environment, a very challenging task because the butterflies' wings in such images are often folded and the features identifying the species cannot be seen. Therefore two new attention mechanisms, DSEA (direct squeeze-and-excitation with global average pooling) and DSEM (direct squeeze-and-excitation with global max pooling), are proposed in this paper to advance the classical object detection algorithm RetinaNet, and deformable convolution is introduced at the same time to strengthen RetinaNet's ability to model butterfly deformation in natural-environment images, so as to identify butterfly species automatically from the features of such images. The well-known target detection criterion mAP (mean average precision) is used to evaluate the proposed model, and visualization is adopted to investigate the primary factors influencing the model's performance. Extensive experiments demonstrate that the improved RetinaNet, especially when embedded with the DSEM module, is effective in identifying butterfly species from images taken in the natural environment. Balanced data improve the generalization of the predictive model, and the structural dissimilarity of samples is a key factor affecting its performance.
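    The difference between the two proposed modules comes down to the squeeze step: DSEA summarizes each channel by its global average, DSEM by its global maximum, before the excitation gate reweights the channels. A simplified sketch of that contrast (the "direct" excitation below is a stand-in; the paper's exact excitation layers are not reproduced):

    ```python
    import math

    def squeeze(feature_maps, pooling="avg"):
        """feature_maps: list of channels, each a flat list of activations."""
        if pooling == "avg":                       # DSEA-style squeeze
            return [sum(ch) / len(ch) for ch in feature_maps]
        return [max(ch) for ch in feature_maps]    # DSEM-style squeeze

    def excite(feature_maps, pooling="avg"):
        """Gate each channel by a sigmoid of its squeezed summary."""
        z = squeeze(feature_maps, pooling)
        weights = [1.0 / (1.0 + math.exp(-v)) for v in z]
        return [[w * a for a in ch] for w, ch in zip(weights, feature_maps)]
    ```

    Max pooling makes the gate respond to the single strongest activation in a channel, which may explain why DSEM helps when the discriminative wing features occupy only a small image region.
    
    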
    Deep Interactive Image Segmentation Based on Fusion Multi-Scale Annotation Information
    Ding Zongyuan, Sun Quansen, Wang Tao, Wang Hongyuan
    Journal of Computer Research and Development    2021, 58 (8): 1705-1717.   DOI: 10.7544/issn1000-1239.2021.20210195
    Existing deep interactive image segmentation algorithms compute distance maps or Gaussian maps for the click annotations and concatenate them with the image as network input, so every click has the same influence range even though each interaction serves a different purpose: early interactions mainly select the target, while later ones prefer fine-tuning. To this end, a deep interactive image segmentation algorithm that fuses multi-scale annotation information is proposed. First, by setting different Gaussian radii, two groups of Gaussian maps at different scales are computed for each click. Second, the small-scale Gaussian maps are fused in, and some down-sampling modules of the basic segmentation network are removed, so that richer detailed target features are extracted. Meanwhile, to maintain the integrity of the target segmentation results, a non-local feature attention module that fuses the large-scale Gaussian maps is proposed. Finally, based on the probability information provided by the Gaussian maps, a probability click loss is proposed to enhance segmentation performance near the click points. Experimental results show that the proposed algorithm not only maintains the integrity of the segmentation but also captures the target's details, greatly reducing the user's interaction burden.
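    The two groups of maps differ only in the Gaussian radius used: a small sigma gives a click a tight, detail-oriented footprint, a large sigma a broad, selection-oriented one. A minimal sketch of computing one such map (grid size and values are illustrative):

    ```python
    import math

    # Gaussian map for a single click at (cy, cx) on an h x w grid; sigma
    # controls the click's influence range.

    def gaussian_map(h, w, click, sigma):
        cy, cx = click
        return [[math.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
                 for x in range(w)]
                for y in range(h)]
    ```

    Stacking a small-sigma and a large-sigma map per click gives the network both a precise pointer and a wide hint about which object was meant.
    
    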
    Parallel Attention Based UNet for Crack Detection
    Liu Fan, Wang Junfeng, Chen Zhiyu, Xu Feng
    Journal of Computer Research and Development    2021, 58 (8): 1718-1726.   DOI: 10.7544/issn1000-1239.2021.20210335
    Cracks pose hidden safety hazards to public facilities, so crack detection is essential for their maintenance. Owing to interference from noise, light, shadow, and other factors in crack images, the neural network is easily disturbed during training, which biases the prediction results and reduces prediction quality. To suppress these disturbances, a parallel attention mechanism is designed, and the parallel attention based UNet (PA-UNet) is proposed by embedding this mechanism into UNet. The parallel attention mechanism increases the weights of crack features along the channel and spatial dimensions to suppress interference, then fuses the features generated by the two dimensions to obtain more complementary crack features. To verify the effectiveness of the proposed method, we conducted experiments on four datasets; the results show that our method outperforms existing popular methods. To demonstrate the effectiveness of the parallel attention mechanism itself, we also compared it with four other attention mechanisms, and it performs better than all of them.
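    "Parallel" here means the channel and spatial branches weight the same input independently and their outputs are then fused, rather than being applied one after the other. A simplified stand-in for that structure (given precomputed branch weights; PA-UNet's actual weight-generation layers and fusion are not reproduced):

    ```python
    # fmap[c][i]: activation of channel c at spatial position i.
    # ch_w[c]: per-channel weight; sp_w[i]: per-position weight.

    def parallel_attention(fmap, ch_w, sp_w):
        ch_branch = [[w * a for a in ch] for w, ch in zip(ch_w, fmap)]
        sp_branch = [[w * a for w, a in zip(sp_w, ch)] for ch in fmap]
        # fuse the two branches by element-wise average
        return [[(c + s) / 2 for c, s in zip(cc, ss)]
                for cc, ss in zip(ch_branch, sp_branch)]
    ```

    Running the branches in parallel means neither weighting is computed from an input already distorted by the other, which is the design argument for this layout over a sequential one.
    
    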
    Survey of Adversarial Attack, Defense and Robustness Analysis for Natural Language Processing
    Zheng Haibin, Chen Jinyin, Zhang Yan, Zhang Xuhong, Ge Chunpeng, Liu Zhe, Ouyang Yike, Ji Shouling
    Journal of Computer Research and Development    2021, 58 (8): 1727-1750.   DOI: 10.7544/issn1000-1239.2021.20210304
    With the rapid development of artificial intelligence, deep neural networks have been widely applied in computer vision, signal analysis, and natural language processing. They help machines process, understand, and use human language through functions such as syntax analysis, semantic analysis, and text comprehension. However, existing studies have shown that deep models are vulnerable to attacks from adversarial texts: when imperceptible adversarial perturbations are added to normal texts, natural language processing models can make wrong predictions. To improve the robustness of natural language processing models, defense-related research has also developed in recent years. Building on existing work, we comprehensively survey related work on adversarial attacks, defenses, and robustness analysis in natural language processing tasks. Specifically, we first introduce the research tasks and the related natural language processing models; then attack and defense approaches are described separately. Certified robustness analysis and benchmark datasets for natural language processing models are further investigated, and a detailed introduction to natural language processing application platforms and toolkits is provided. Finally, we summarize future research directions on attacks and defenses.
    Siamese BERT-Networks Based Classification Mapping of Scientific and Technological Literature
    He Xianmin, Li Maoxi, He Yanqing
    Journal of Computer Research and Development    2021, 58 (8): 1751-1760.   DOI: 10.7544/issn1000-1239.2021.20210323
    The international patent classification (IPC) and the Chinese library classification (CLC), as important classification schemes, play important roles in the organization and management of patent information and journal literature respectively. Accurately establishing the mapping relationship between the two classifications is of great significance for cross-browsing and retrieval across patent information and journal resources. In this paper, a siamese network based on the BERT pre-trained contextual language model is proposed to establish the mapping between IPC and CLC. The siamese network model encodes the description texts of the two classification categories, computes sentence vectors of the same dimension by average-pooling the encoded word representations, and scores the similarity between sentences with cosine similarity to complete the classification mapping. A mapping corpus between IPC and CLC categories is manually annotated. Experimental results on this corpus show that the proposed method significantly outperforms the rule-based method and other deep neural network methods such as Sia-Multi, Bi-TextCNN, and Bi-LSTM. The relevant code, models, and the manually annotated corpus are publicly released.
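    The scoring pipeline after the encoder is simple: average-pool the token vectors of each category description into one sentence vector, then compare the two vectors by cosine similarity. A self-contained sketch with toy token vectors standing in for BERT outputs:

    ```python
    import math

    def mean_pool(token_vectors):
        """Average a list of equal-length token vectors into one sentence vector."""
        n = len(token_vectors)
        dim = len(token_vectors[0])
        return [sum(v[d] for v in token_vectors) / n for d in range(dim)]

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)
    ```

    In the siamese setup both description texts pass through the same encoder weights, so the cosine score is symmetric and directly comparable across candidate category pairs.
    
    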
    Pretraining Financial Language Model with Multi-Task Learning for Financial Text Mining
    Liu Zhuang, Liu Chang, Wayne Lin, Zhao Jun
    Journal of Computer Research and Development    2021, 58 (8): 1761-1772.   DOI: 10.7544/issn1000-1239.2021.20210298
    Financial text mining is becoming increasingly important as the number of financial documents grows rapidly. With the progress of machine learning, extracting valuable information from financial literature has gained attention among researchers, and deep learning has boosted the development of effective financial text mining models. However, deep learning models require a large amount of labeled training data, so applying deep learning to financial text mining often fails due to the lack of training data in the financial field. Recent research on training contextualized language representation models on text corpora sheds light on the possibility of leveraging large unlabeled financial text corpora. We introduce F-BERT (BERT for financial text mining), a domain-specific language representation model pre-trained on large-scale financial corpora. Based on the BERT architecture, F-BERT effectively transfers knowledge from a large amount of financial text to financial text mining models with minimal task-specific architecture modifications. The results show that F-BERT outperforms most current state-of-the-art models, demonstrating its effectiveness and robustness.