ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 May 2018, Volume 55 Issue 5
Dynamic Fuzzy Features Selection Based on Variable Weight
Wang Ling, Meng Jianyao
2018, 55(5):  893-907.  doi:10.7544/issn1000-1239.2018.20170503
In this paper, a new scheme for dynamic fuzzy feature selection based on variable weight is proposed to optimize the fuzzy feature subset dynamically with the important features. First, a sliding window is adopted to divide the fuzzy dataset. In the first sliding window, an off-line fuzzy feature selection algorithm is proposed to obtain the candidate fuzzy feature subset by calculating the weight of each fuzzy input feature according to the mutual information between the fuzzy input features and the output feature. On this basis, the optimal fuzzy feature subset is obtained by combining the backward feature selection method with the fuzzy feature selection index. For each new sliding window, an on-line fuzzy feature selection algorithm is proposed: by integrating the optimal fuzzy feature selection result from the previous sliding window with the candidate fuzzy feature set of the current window, the importance of each fuzzy input feature is calculated to obtain the optimal feature subset for the current window. Finally, the evolving relationship of the fuzzy input features is revealed by tracking the fuzzy feature weights across sliding windows. Simulation results show that the proposed algorithm significantly improves adaptability and prediction accuracy.
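The off-line weighting step described above, scoring each input feature by its mutual information with the output, can be sketched as follows. This is a crisp (non-fuzzy) simplification: the histogram-based MI estimator, the `bins` parameter and all function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Estimate mutual information between two features via histogram binning."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def feature_weights(X, y, bins=8):
    """Weight of each input feature = its MI with the output, normalized to sum to 1."""
    mi = np.array([mutual_information(X[:, j], y, bins) for j in range(X.shape[1])])
    return mi / mi.sum() if mi.sum() > 0 else mi
```

A backward selection pass would then repeatedly drop the lowest-weighted feature while the selection index keeps improving.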
The Algorithm of Ship Rule Path Extraction Based on the Grid Heat Value
Li Jianjiang, Chen Wei, Li Ming, Zhang Kai, Liu Yajun
2018, 55(5):  908-919.  doi:10.7544/issn1000-1239.2018.20170226
With the development of moving-target location technologies such as GPS, wireless sensors and satellites, a large amount of mobility data is generated, such as human walking trajectories, vehicle trajectories and ship trajectories. However, moving-target detection devices can only store a series of discrete points, so recovering the full path from these discrete points is a necessary prerequisite for grasping a moving target's movement rules. Data mining can discover regular paths from the historical information of moving targets, and grid-based clustering can not only express these trajectories effectively but also analyze the relationships among the points, making it an effective method for path extraction. At present, research on trajectory clustering mostly works from the perspective of space or time, using density-based clustering to find hot paths. These paths are often discrete path fragments, which cannot effectively express the continuous paths of moving targets with different shapes. In this paper, a heat-factor similarity measurement method based on the combination of distance and the density of grid heat values is proposed. Finally, real automatic identification system (AIS) dynamic data are used to verify the accuracy and performance of the algorithm. The algorithm analysis and experimental results show that the proposed regular-path extraction algorithm based on grid heat values can effectively find trajectory sequences of different shapes.
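The core of a grid-heat representation, rasterizing trajectory points into cells and counting visits as "heat," can be sketched minimally. The cell size and names are illustrative assumptions; the paper's heat-factor similarity measure is not reproduced here.

```python
from collections import Counter

def grid_heat(points, cell=1.0):
    """Map each (x, y) trajectory point to a grid cell and count visits:
    the visit count is that cell's heat value."""
    heat = Counter()
    for x, y in points:
        heat[(int(x // cell), int(y // cell))] += 1
    return heat
```

Cells with high heat values along connected sequences would then be candidates for regular-path extraction.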
A Recommendation Engine for Travel Products Based on Topic Sequential Patterns
Zhu Guixiang, Cao Jie
2018, 55(5):  920-932.  doi:10.7544/issn1000-1239.2018.20160926
Travel product recommendation has become one of the emerging issues in the realm of recommender systems. The widely used collaborative filtering algorithms are usually difficult to apply to travel products for a number of reasons: 1) the content of travel products is very complex, 2) the user-item matrix is extremely sparse, and 3) cold-start users are widespread. To tackle these issues, we exploit Web server logs for generating recommendations and present a novel recommendation engine (SECT for short) for travel products based on topic sequential patterns. In detail, we first extract topics from the semantic description of every Web page. Then, we mine frequent topic sequential patterns and their target products to form a click-pattern library. Finally, we propose a Markov n-gram model for matching the real-time click-stream of users against the click-pattern library and computing recommendation scores. To enhance the efficiency of online computing, we design a new multi-branch tree data structure called the PSC-tree to store the historical click-pattern library and integrate it seamlessly with the online computing module. Experimental results on a real-world travel dataset demonstrate that SECT prevails over state-of-the-art baseline algorithms. In particular, SECT improves both the coverage and the accuracy of recommendations for cold-start users, and it is also effective at recommending long-tail items.
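A first-order (bigram) Markov version of the click-pattern matching idea can be sketched as follows. The paper's SECT engine uses topic sequential patterns, an n-gram model and the PSC-tree, none of which are reproduced here, so treat this as an illustrative toy with assumed names.

```python
from collections import defaultdict

def build_bigram_model(sessions):
    """Count topic-to-topic transitions from historical click sessions."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for a, b in zip(session, session[1:]):
            counts[a][b] += 1
    return counts

def recommend(counts, last_topic, k=2):
    """Score candidate next topics by transition frequency (Markov order 1)."""
    cand = counts.get(last_topic, {})
    return [t for t, _ in sorted(cand.items(), key=lambda kv: -kv[1])[:k]]
```

A higher-order n-gram model would condition on the last n-1 topics of the live click-stream instead of only the last one.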
Multi-Source Emotion Tagging for Online News Comments Using Bi-Directional Hierarchical Semantic Representation Model
Zhang Ying, Wang Chao, Guo Wenya, Yuan Xiaojie
2018, 55(5):  933-944.  doi:10.7544/issn1000-1239.2018.20160947
With the rapid growth of news services, users can now actively respond to online news by expressing subjective emotions. Such emotions can help us understand the preferences and perspectives of individual users, and thus may help online publishers provide users with more relevant services. Research on emotion tagging has made promising progress, but some problems remain. First, traditional methods regard a document as a flow or bag of words and thus cannot extract the logical relationships among sentences appropriately; consequently, they cannot represent a document's semantics properly when such relationships exist. Second, these methods use only the semantics of the document itself, ignoring the accompanying information sources, which can significantly influence the interpretation of the sentiment contained in a document. To solve these problems, this paper proposes a hierarchical semantic representation model of news comments using multiple information sources, called the bi-directional hierarchical semantic neural network (Bi-HSNN), which not only captures the sentiment among words in sentences but also learns, in a bottom-up way, the logical relationships among sentences in a document. The paper tackles emotion tagging of online news comments by exploiting multiple information sources, including the comments, the news articles, and user-generated emotion votes. A series of experiments on real-world datasets demonstrates the effectiveness of the proposed model.
Chinese Micro-Blog Sentiment Analysis Based on Multi-Channels Convolutional Neural Networks
Chen Ke, Liang Bin, Ke Wende, Xu Bo, Zeng Guochao
2018, 55(5):  945-957.  doi:10.7544/issn1000-1239.2018.20170049
Neural network-based architectures have been widely applied to sentiment analysis and have achieved great success in recent years. However, most previous approaches classify with word features only, ignoring other characteristic features relevant to sentiment classification. One of the remaining challenges is to leverage sentiment resources effectively given the short length of Chinese micro-blog texts. To address this problem, we propose a novel sentiment classification method for Chinese micro-blogs based on multi-channel convolutional neural networks (MCCNN) to capture the characteristic information in micro-blog texts. With the help of a part-of-speech vector, the model makes full use of sentiment features exposed by part-of-speech tagging. Meanwhile, a position vector indicates the importance of each word in the sentence, impelling the model to focus on the important words during training. A multi-channel architecture based on convolutional neural networks then learns more feature information from micro-blog texts, extracting more hidden information by combining the different vectors with the original word embeddings. Finally, experiments on the COAE2014 dataset and a micro-blog dataset show better performance than current mainstream convolutional neural networks and traditional classifiers.
Rough Set Knowledge Discovery Based Open Domain Chinese Question Answering Retrieval
Han Zhao, Miao Duoqian, Ren Fuji, Zhang Hongyun
2018, 55(5):  958-967.  doi:10.7544/issn1000-1239.2018.20170232
In information retrieval (IR) based open-domain question answering (QA) systems, the main principle is to first use semantic tools and a knowledge base to obtain semantic and knowledge information, and then to calculate a matching value over both. However, in some practical applications of Chinese question answering, the current methods are not very effective because of the uncertainty of both Chinese language representation and Chinese knowledge representation. To solve this problem, a rough set knowledge discovery based Chinese question answering method is proposed in this paper. It uses rough set equivalence partitioning to represent the rough set knowledge of QA pairs, and then uses the idea of attribute reduction to mine the upper-approximation representations of all knowledge items. Based on the rough set QA knowledge base, the knowledge matching value of a QA pair can be calculated as a kind of knowledge-item similarity. After the knowledge similarities between a question and its candidate answers are computed, the final matching values, which combine rough set knowledge similarity with traditional sentence similarity, are used to rank the candidate answers. Experiments show that the proposed method improves MAP and MRR compared with baseline information retrieval methods.
A Collaborative Filtering Recommendation Algorithm Based on Information of Community Experts
Zhang Kaihan, Liang Jiye, Zhao Xingwang, Wang Zhiqiang
2018, 55(5):  968-976.  doi:10.7544/issn1000-1239.2018.20170253
Collaborative filtering recommendation algorithms have been widely used because they are not limited by knowledge of a specific domain and are easy to implement. However, they face several issues, such as data sparsity, scalability and cold start, which affect their effectiveness in some practical application scenarios. To address the user cold-start problem, a collaborative filtering recommendation algorithm based on information of community experts is proposed in this paper, merging social trust information (i.e., trusted neighbors explicitly specified by users) with rating information. First of all, users are divided into different communities based on their social relations. Then, experts in each community are identified according to certain criteria. In addition, to alleviate the impact of data sparsity, the ratings of an expert's trusted neighbors are merged to complement the expert's own ratings. Finally, the prediction for a given item is generated by aggregating the ratings of the experts in the target user's community. Experimental results on two real-world data sets, FilmTrust and Epinions, show that the proposed algorithm alleviates the user cold-start problem and is superior to other algorithms in terms of MAE and RMSE.
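Two of the steps above, complementing an expert's ratings with those of trusted neighbors and predicting from community experts, can be sketched as follows. This uses a simplified mean aggregation with assumed names; the paper's expert-identification criteria are not reproduced.

```python
def complement_with_trust(expert, trusted, ratings):
    """Fill gaps in an expert's rating profile with trusted neighbors' ratings;
    the expert's own ratings take precedence. `ratings[user]` maps items to scores."""
    merged = dict(ratings.get(expert, {}))
    for t in trusted:
        for item, r in ratings.get(t, {}).items():
            merged.setdefault(item, r)
    return merged

def predict(item, experts, ratings):
    """Predict a rating for `item` as the mean of the community experts' ratings."""
    scores = [ratings[e][item] for e in experts if item in ratings.get(e, {})]
    return sum(scores) / len(scores) if scores else None
```

A cold-start user inherits predictions from the experts of their community even with no rating history of their own.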
A Preference Prediction Method Based on the Optimization of Basic Similarity Space Distribution
Gao Ling, Gao Quanli, Wang Hai, Wang Wei, Yang Kang
2018, 55(5):  977-985.  doi:10.7544/issn1000-1239.2018.20160924
The similarity measures of preference behavior in existing collaborative filtering based recommender systems are unable to acquire the real nearest neighbors, which reduces prediction accuracy. To solve this problem, a user preference prediction method based on the optimization of the basic similarity space distribution is proposed. First, the method uses cosine similarity, constrained cosine similarity and the Pearson correlation coefficient to obtain the original similarities among users. Second, it generates a preference center based on the distribution of users' preference similarities, and then derives an average similarity range from the behavioral distance between each preference behavior and the preference center, building the basic similarity space. Finally, the method builds a correction model based on average nearest neighbors and abnormal ratings to optimize the basic similarity space, from which predictions for users are generated. Empirical experiments on a large real-world data set show that the proposed method lowers MAE by about 12.8% and 9.7% compared with WSCF and OTCF respectively, increases the coverage rate by about 5.79% and 3.83%, and matches the diversity of WSCF while improving on OTCF by about 4.3%, indicating that the proposed method can efficiently improve recommendation quality.
Selective Ensemble Classification Integrated with Affinity Propagation Clustering
Meng Jun, Zhang Jing, Jiang Dingling, He Xinyu, Li Lishuang
2018, 55(5):  986-993.  doi:10.7544/issn1000-1239.2018.20170077
Mining useful knowledge from gene expression data is a hot research topic in bioinformatics. Gene microarray data are characterized by high dimensionality, small sample size and high redundancy. Therefore, a gene selection method based on the intersection neighborhood rough set is presented to select important genes for microarray data classification. First, pathway knowledge is used to preselect genes, with each pathway unit corresponding to a gene subset. Then, rough set based attribute reduction is applied to select important, non-redundant genes for classification. Because there are many pathway knowledge units, many base classifiers are generated. To further improve the diversity among base classifiers and the efficiency of the ensemble model, part of the base classifiers must be selected. Affinity propagation (AP) clustering does not require the number of clusters or the starting points to be specified, and it can obtain clusters quickly and accurately. Therefore, AP clustering is used to group the base classifiers into clusters with significant diversity among them, and one classifier is selected from each cluster to form the final ensemble classifier. Experimental results on three Arabidopsis thaliana biotic and abiotic stress response datasets show that the proposed method improves accuracy by more than 12% compared with existing ensemble methods.
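The ensemble-pruning idea, measuring pairwise diversity among base classifiers, clustering them, and keeping one representative per cluster, can be sketched as follows. A full affinity propagation implementation (e.g., scikit-learn's `AffinityPropagation`) would consume the negated disagreement matrix as similarities; only the surrounding steps are shown here, and all names are illustrative.

```python
import numpy as np

def disagreement_matrix(predictions):
    """Pairwise diversity between base classifiers: the fraction of samples
    on which two classifiers disagree."""
    P = np.asarray(predictions)
    n = P.shape[0]
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            D[i, j] = np.mean(P[i] != P[j])
    return D

def pick_representatives(labels, accuracies):
    """From each cluster of base classifiers, keep the most accurate one."""
    best = {}
    for idx, (lab, acc) in enumerate(zip(labels, accuracies)):
        if lab not in best or acc > accuracies[best[lab]]:
            best[lab] = idx
    return sorted(best.values())
```

The selected representatives then vote to form the final ensemble.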
An Efficient Searchable Encryption Scheme with Designed Tester and Revocable Proxy Re-Encryption
Xu Qian, Tan Chengxiang, Fan Zhijie, Feng Jun, Zhu Wenye, Xiao Ya
2018, 55(5):  994-1013.  doi:10.7544/issn1000-1239.2018.20161051
Hidden vector encryption (HVE) is a notable form of predicate encryption that enables fine-grained control over the decryption key and supports conjunctive keyword search and range queries on encrypted data. Such a technology can play an important role in electronic health record (EHR) systems, since it combines security protection with convenient search over sensitive medical records. However, no existing HVE scheme provides a designed tester and automatic delegation while keeping communication and computation overhead low. In this paper, an efficient HVE scheme with a designed tester and timing-controlled proxy re-encryption is proposed. The delegatee can perform search operations on the re-encrypted ciphertext during a period of time specified by the delegator, and the search authority is revoked automatically after this period. Since only the designed tester can test whether a given query token matches a ciphertext, the proposed scheme also resists off-line keyword guessing (KG) attacks. Moreover, the scheme is proved secure against chosen keyword and chosen time attacks in the standard model and maintains relatively low asymptotic complexity, requiring only O(1) token size and O(1) bilinear pairing computations in the test process.
Self-Adaptive Decision Making Under Uncertainty in Environment and Requirements
Yang Zhuoqun, Jin Zhi
2018, 55(5):  1014-1033.  doi:10.7544/issn1000-1239.2018.20161039
Software systems interact intensively with other software and hardware systems, devices and users. The operating environment of software becomes unstable, and software requirements may also change. Because it is hard to predict the environment and requirements at runtime, their changes are uncertain. To provide continuous service, software systems need to adjust themselves according to changes in the environment and in themselves, and these uncertainties bring great challenges to the adaptation process. Existing efforts either target modeling the effects of environmental changes on requirements, or focus on adjusting software behaviors to satisfy fixed requirements under a changing environment. With these approaches, it is difficult to deal with the variability and complexity of the adaptation process when requirements are uncertain. This paper proposes a fuzzy control based adaptation decision-making approach to tackle environment and requirements uncertainties at runtime. It applies fuzzy logic to model and specify the variables in the environment and the software and generates reasoning rules between variables; it designs the adaptation mechanism based on a feedforward-feedback control structure and fuzzy controllers; and it implements decision-making through fuzzy inference and a genetic algorithm. The adaptation results under different environments and constraints show that software can reach the optimal decision with the proposed mechanism and algorithms. The feasibility and effectiveness of the approach are illustrated through a mobile bitcoin-miner system.
A Multimedia File Cloud Storage System to Support Data Deduplication and Logical Expansion
Wang Shuai, Lü Jianghua, Wang Ronghe, Wu Jifang, Ma Shilong
2018, 55(5):  1034-1048.  doi:10.7544/issn1000-1239.2018.20160853
With the development of the Internet, scenarios involving the storage of multimedia files are increasing, and cloud storage systems have become a focus of the Internet field. Many cloud storage systems provide applications with data storage, query and computing services, and many applications hold large numbers of small, duplicate multimedia files. Traditional distributed file systems are not well suited to storing and accessing such files, because they usually divide a file into several blocks stored on many data servers, and every time an application needs the file content, the file must be reconstructed from its blocks. This strategy consumes extra resources when users access multimedia files. To let applications with high data redundancy store multimedia files efficiently at low cost, we propose a distributed directory tree model to describe the logical structure of directories in the data center. We then design and implement a distributed multimedia file system named MFCSS that supports data deduplication and logical expansion of directories. The experimental results show that the system not only performs well when saving multimedia files with high redundancy but also effectively improves the efficiency of disk storage. Moreover, the MFCSS system has good scalability and simplifies the management of multimedia files stored in distributed environments for applications.
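The deduplication idea, storing identical file contents once and letting each logical path reference a content hash, can be sketched as an in-memory toy. This is not MFCSS itself; SHA-256 content addressing and all names are illustrative assumptions.

```python
import hashlib

class DedupStore:
    """Content-addressed store: identical file contents are kept once,
    and each logical path only references the content hash."""
    def __init__(self):
        self.blocks = {}   # content hash -> file bytes (stored once)
        self.index = {}    # logical path  -> content hash

    def put(self, path, data):
        h = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(h, data)   # physical copy written at most once
        self.index[path] = h

    def get(self, path):
        return self.blocks[self.index[path]]

    def physical_copies(self):
        return len(self.blocks)
```

Duplicate uploads cost only an index entry, which is where the disk-efficiency gain for highly redundant workloads comes from.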
MH-RLE: A Compression Algorithm for Dynamic Reconfigurable System Configuration Files Based on Run-Length Coding
Wu Weiguo, Wang Chaohui, Wang Jinyu, Nie Shiqiang, Hu Zhuang
2018, 55(5):  1049-1064.  doi:10.7544/issn1000-1239.2018.20170015
With advances in integrated circuit technology, the scale of FPGA on-chip resources has increased dramatically, and the quantity of FPGA reconfigurable resources keeps rising. Correspondingly, configuration file sizes grow and configuring a reconfigurable system takes too long, which seriously hinders the adoption of dynamically reconfigurable systems in real-time applications. The main current solution is to compress the configuration file: an upper computer compresses the file first, and on-chip configuration circuits decompress it, reducing the amount of configuration data that must be transferred. In this paper, we propose an algorithm named MH-RLE for compressing dynamic reconfigurable system configuration files, based on the distribution characteristics of "0" and "1" bits in FPGA binary configuration files. First, RLE fixed-length compression is applied to the configuration file. Second, Huffman coding is used to handle the zero placeholders of the counters introduced by the fixed-length RLE. Finally, to further improve the compression rate, we design a bitmask-based function to recompress. Simulation results show that the average compression rate of MH-RLE is 49.82% and that, compared with six other compression methods, MH-RLE reduces the compression rate by up to 12.4%.
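The fixed-length RLE stage can be sketched as follows: runs are capped so that every counter fits a fixed-width field, which is what later produces the zero placeholders that the Huffman stage targets. The names and the `counter_bits` parameter are illustrative assumptions, not the paper's exact encoding.

```python
def rle_fixed(bits, counter_bits=4):
    """Fixed-length RLE over a bit string: emit (bit, run_length) pairs,
    capping each run at 2**counter_bits - 1 so every counter fits a
    fixed-width field."""
    cap = 2 ** counter_bits - 1
    runs = []
    i = 0
    while i < len(bits):
        b = bits[i]
        n = 1
        while i + n < len(bits) and bits[i + n] == b and n < cap:
            n += 1
        runs.append((b, n))
        i += n
    return runs
```

A run longer than the cap splits into several counters, padding the counter field with leading zeros; Huffman-coding those frequent short codes is what the second stage exploits.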
FTL Address Mapping Method Based on Mapping Entry Inter-Reference Recency
Zhou Quanbiao, Zhang Xingjun, Liang Ningjing, Huo Wenjie, Dong Xiaoshe
2018, 55(5):  1065-1077.  doi:10.7544/issn1000-1239.2018.20170254
Demand-based flash translation layer (DFTL), a classical FTL address mapping method, resolves the contradiction between the large amount of mapping information and the limited cache capacity by caching only the most recently used address mappings and leaving the global mapping table in flash memory. However, DFTL does not take full advantage of the spatial locality of workloads: on a cache miss, dirty mapping entries are swapped out frequently, causing many write operations on mapping pages. In addition, DFTL cannot address the write amplification caused by valid-page migration during garbage collection. In this paper, we propose a novel FTL address mapping method named IRR-FTL, based on the inter-reference recency (IRR) of mapping entries. First, IRR-FTL makes the most of the spatial locality of workloads by providing cache slots for translation pages. Second, IRR-FTL adaptively partitions the write cache mapping table according to the IRR of mapping entries, which reduces write operations on translation pages. Finally, IRR-FTL separates hot and cold data, which improves garbage collection efficiency. Experimental results with a variety of workloads show that, compared with DFTL, IRR-FTL improves the cache hit rate by 29.1% and reduces the average response time and erase counts by 27.3% and 10.7%, respectively.
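The DFTL-style cached mapping table that IRR-FTL builds on can be sketched as an LRU cache over logical-to-physical page mappings. This is a simplified model with assumed names; IRR tracking and the partitioned write cache are not reproduced.

```python
from collections import OrderedDict

class MappingCache:
    """LRU cache of logical-to-physical page mappings, in the spirit of DFTL:
    only the hottest entries live in RAM, the rest stay in flash."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()      # lpn -> ppn, oldest first
        self.hits = self.misses = 0

    def lookup(self, lpn, load_from_flash):
        if lpn in self.entries:
            self.hits += 1
            self.entries.move_to_end(lpn)            # refresh recency
        else:
            self.misses += 1
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)     # evict the LRU entry
            self.entries[lpn] = load_from_flash(lpn) # fetch mapping from flash
        return self.entries[lpn]
```

Every miss costs a translation-page read (and a write if the victim was dirty), which is exactly the overhead IRR-FTL attacks.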
Multiple Object Saliency Detection Based on Graph and Sparse Principal Component Analysis
Liang Dachuan, Li Jing, Liu Sai, Li Dongmin
2018, 55(5):  1078-1089.  doi:10.7544/issn1000-1239.2018.20160681
In order to detect multiple salient objects in images with cluttered backgrounds, a new multi-object saliency detection method based on a fully connected graph and sparse principal component analysis is proposed. First, a rapid coarse detection method at different scales is adopted to obtain an object prior with the locations of candidate objects and a pixel-level saliency map. Meanwhile, a fully connected graph is constructed over a superpixel segmentation to obtain a superpixel-level saliency map. Salient regions are extracted from the binarized superpixel-level salient object prior map, and sparse principal component analysis is used to extract the main feature vectors from the pixel matrix composed of the pixels in the optimized salient regions, yielding the saliency map at the corresponding scale. Finally, the multi-scale saliency maps are fused into the final saliency map. The method takes advantage of both pixel-level and superpixel-level processing; it not only simplifies the computation but also improves the detection precision for salient objects. Quantitative experiments on two public datasets, SED2 and HKU_IS, demonstrate that our method can detect multiple salient objects in complex images and outperforms other state-of-the-art methods.
A Sparse Signal Reconstruction Algorithm Based on Approximate l_0 Norm
Nie Dongdong, Gong Yaoling
2018, 55(5):  1090-1096.  doi:10.7544/issn1000-1239.2018.20160829
The signal reconstruction algorithm is the key to compressed sensing. Signal reconstruction based on an approximate l_0 norm chooses a continuous function to estimate the l_0 norm, transforming the l_0-norm minimization problem into the optimization of a smooth function. Selecting an appropriate smooth function and optimization algorithm is therefore critical. To improve the accuracy of sparse signal recovery in compressed sensing, this paper, building on previous work, proposes the sum of simple fractional functions to approximate the l_0 norm. The sparse solution of the resulting unconstrained optimization problem is then found by a Newton iterative algorithm, effectively integrating the fast convergence of approximate l_0-norm methods with the high precision of Newton iteration; thus the l_0-norm minimization is approximated smoothly and efficiently in less time. The performance of the proposed algorithm is tested and compared with existing similar algorithms under different compression ratios, sparsity levels and noise levels in simulation experiments. The results show that the proposed algorithm outperforms existing similar algorithms and greatly improves the precision of the reconstructed signal, effectively improving signal recovery quality in compressed sensing under the same conditions.
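A representative fractional surrogate of this kind (one common choice; the paper's exact function may differ) is:

```latex
% A smooth surrogate for the l_0 norm: each term tends to 1 when
% |x_i| >> sigma and to 0 when x_i = 0, so the sum counts nonzeros.
\|x\|_0 \approx F_\sigma(x) = \sum_{i=1}^{N} \frac{|x_i|}{|x_i| + \sigma},
\qquad \lim_{\sigma \to 0^+} F_\sigma(x) = \|x\|_0 .
```

Minimizing F_sigma subject to the measurement constraint, while shrinking sigma across outer iterations, yields a smooth approximation to l_0-norm minimization that Newton steps can exploit.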
Circuit Design of Convolutional Neural Network Based on Memristor Crossbar Arrays
Hu Fei, You Zhiqiang, Liu Peng, Kuang Jishun
2018, 55(5):  1097-1107.  doi:10.7544/issn1000-1239.2018.20170107
Memristor crossbar arrays have attracted wide attention due to their excellent performance in neuromorphic computing. In this paper, we design a circuit that realizes a convolutional neural network (CNN) using memristors and CMOS devices. First, we improve a memristor crossbar array so that it can store weights and biases accurately; after introducing an appropriate encoding scheme, the dot product of two vectors can be computed. The improved crossbar array is employed for the convolution, pooling and classifier operations of a CNN. Second, exploiting the high fault tolerance of CNNs, we design a memristive CNN architecture based on the improved crossbar array to perform a basic CNN algorithm. In this architecture, the analog results of convolution operations are sampled and held before pooling, rather than passing through analog-to-digital and digital-to-analog converters between the convolution and pooling operations as in a previous architecture. Experimental results show that the designed circuit, with an area of 0.8525 cm^2, achieves a speedup of 1770x over a GPU platform. Compared with a previous memristor-based architecture of similar area, our design is 7.7x faster. The average recognition errors of the designed circuit are only 0.039% and 0.012% higher than those of a software implementation for memristors with 6-bit and 8-bit storage capacities, respectively.
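The crossbar's analog dot product can be modeled in a few lines: each weight is stored as a pair of conductances so that negative values are representable, and the column current implements the multiply-accumulate. This is an idealized numeric model with assumed names, not the designed circuit.

```python
def crossbar_dot(voltages, g_pos, g_neg):
    """Idealized memristor-crossbar dot product: weight w_j is stored as a
    conductance pair (g_pos[j], g_neg[j]) with w_j = g_pos[j] - g_neg[j].
    By Kirchhoff's current law, each column current sums v_j * g_j, and
    subtracting the two column currents yields sum(v_j * w_j)."""
    i_pos = sum(v * g for v, g in zip(voltages, g_pos))
    i_neg = sum(v * g for v, g in zip(voltages, g_neg))
    return i_pos - i_neg
```

Convolution then reduces to sliding this dot product over input patches, with one crossbar column per kernel.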
Effects of Three Factors Under BTI on the Soft Error Rate of Integrated Circuits
Wang Zhen, Jiang Jianhui, Chen Naijin, Lu Guangming, Zhang Ying
2018, 55(5):  1108-1116.  doi:10.7544/issn1000-1239.2018.20170094
In the nanoscale era, integrated circuit reliability issues caused by both aging mechanisms and soft errors become critical, yet few studies combine several factors to analyze the impact of aging on the soft error rate (SER). As a typical aging mechanism, bias temperature instability (BTI) includes negative BTI (NBTI) in PMOS transistors and positive BTI (PBTI) in NMOS transistors, and most current work focuses on a single factor affected by NBTI. Building on research into the effect of gate delay under BTI on SER, this paper studies the impacts of single event transient (SET) pulse width and critical charge. First, a variation model of SET pulse width under BTI in a 32nm technology is built that also considers PBTI; then the paper explores how to account for SET pulse width and critical charge in SER calculation, and proposes that the variation of SET pulse width can be reflected by that of the injected charge during SER estimation. HSPICE simulations and C++ experiments show that, among the three factors, delay and SET pulse width have little influence. In conclusion, the critical charge is the key factor: SER increases under BTI, with the effect greatest after one year and slowing down thereafter.