ISSN 1000-1239 CN 11-1777/TP

Most cited articles

    A Grid & P2P Trust Model Based on Recommendation Evidence Reasoning
    Zhu Junmao, Yang Shoubao, Fan Jianping, and Chen Mingyu
    Abstract views: 793 | HTML views: 35 | PDF (412KB) downloads: 605
    Under a mixed computing environment of grid and P2P (Grid & P2P), grid nodes provide services with QoS guarantees, whereas sharing the computing resources of P2P nodes is a voluntary action of users without any QoS guarantee, and users are not responsible for their actions. It is therefore difficult to establish trust relationships among users with traditional trust mechanisms. Referring to trust relationship models in human society, a Grid & P2P trust model based on recommendation evidence reasoning is designed to solve this problem by building a recommendation mechanism in Grid & P2P and integrating the recommendation evidence with D-S theory. Theoretical analysis and simulations prove that the model can tackle the trust problem under Grid & P2P in a simple and efficient way.
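    The evidence-fusion step can be illustrated with Dempster's combination rule from D-S theory. The sketch below is a minimal illustration rather than the paper's model: the frame {trust, distrust}, the mass values, and the function name dempster_combine are assumptions.

    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two mass functions (dicts: frozenset -> mass) with Dempster's rule."""
        combined, conflict = {}, 0.0
        for (a, x), (b, y) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + x * y
            else:
                conflict += x * y  # mass falling on contradictory evidence
        if conflict >= 1.0:
            raise ValueError("total conflict: evidence cannot be combined")
        return {s: v / (1.0 - conflict) for s, v in combined.items()}

    # Hypothetical recommendation evidence about one node over the frame {trust, distrust}.
    T, D = frozenset({"trust"}), frozenset({"distrust"})
    TD = T | D  # ignorance: mass committed to the whole frame
    rec1 = {T: 0.7, TD: 0.3}
    rec2 = {T: 0.5, D: 0.2, TD: 0.3}
    print(dempster_combine(rec1, rec2))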
    Cited: Baidu(71)
    Multicast Scheduling in Buffered Crossbar Switches with Multiple Input Queues
    Sun Shutao, He Simin, Zheng Yanfeng, and Gao Wen
    Abstract views: 776 | HTML views: 2 | PDF (489KB) downloads: 441
    The scheduling of multicast traffic in bufferless crossbar switches has been extensively investigated. However, all the proposed solutions are hardly practical for high-capacity switches because of either poor performance or high complexity. A buffered crossbar switch with multiple input queues per input port for transferring multicast traffic is proposed. Under this architecture, the scheduler operates in three stages, namely cell assignment, input scheduling, and output scheduling. Scheduling algorithms with complexity ranging from O(1) upward are presented for the different scheduling stages. Simulation results show that both the number of input queues and the size of the crosspoint buffer affect the throughput of a buffered crossbar under multicast traffic. Under bursty multicast traffic, however, increasing the number of input queues gains more, no matter which algorithm is used, whether HA-RR-RR with complexity O(1) or MMA-MRSF-LQF with higher complexity. This shows that the proposed scheme is more appropriate for high-performance switches.
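    A round-robin arbiter of the kind the O(1) HA-RR-RR scheme relies on can be sketched as follows; this is a generic illustration (the queue layout and pointer handling are assumptions), not the paper's three-stage scheduler.

    def round_robin_pick(requests, pointer):
        """Pick the first requesting index at or after `pointer`, wrapping around.

        requests: one boolean per input queue; pointer: current round-robin position.
        Returns (granted index or None, updated pointer).
        """
        n = len(requests)
        for offset in range(n):
            i = (pointer + offset) % n
            if requests[i]:
                return i, (i + 1) % n  # advance the pointer past the grant
        return None, pointer

    # Hypothetical: 4 input queues competing for one output in a scheduling round.
    grant, ptr = round_robin_pick([False, True, True, False], pointer=0)
    print(grant, ptr)  # -> 1 2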
    Cited: Baidu(40)
    Improved Molecular Solutions for the Knapsack Problem on DNA-Based Supercomputing
    Li Kenli, Yao Fengjuan, Li Renfa, and Xu Jin
    Abstract views: 692 | HTML views: 1 | PDF (489KB) downloads: 822
    DNA-based supercomputing has solved hard computational problems such as NP-complete problems in polynomially increasing time by exploiting its super-parallel and high-density power. However, almost all current DNA computing strategies are based on the enumerative method, which causes the size of the initial DNA strand library to grow exponentially. How to reduce this exponentially growing number of DNA strands is therefore an important issue in research on DNA computers. To solve the knapsack problem, a famous NP-complete problem, with a DNA computer, the divide-and-conquer strategy is introduced into DNA-based supercomputing and a DNA algorithm is proposed. The proposed algorithm consists of an n-bit parallel subtracter, an n-bit parallel searcher, and four other sub-procedures. It is demonstrated that the proposed algorithm reduces the number of DNA library strands from O(2^q) to O(2^(q/2)) for a q-dimensional knapsack instance, while keeping the operation time essentially unchanged. This shows that traditional algorithmic techniques remain important even in designing DNA computer algorithms. Furthermore, this work indicates that knapsack-based public-key cryptosystems are perhaps insecure, because, theoretically, a 120-variable knapsack public key could easily be broken once DNA computer technology matures.
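    The strand-count reduction mirrors the classical meet-in-the-middle idea: splitting a q-item instance into two halves turns one 2^q enumeration into two 2^(q/2) enumerations. The sketch below is a conventional (non-DNA) illustration of that split for the 0/1 knapsack decision problem; the instance data are made up.

    from bisect import bisect_right
    from itertools import combinations

    def half_subsets(items):
        """Enumerate (weight, value) of every subset of `items` (2^(q/2) of them)."""
        subs = []
        for r in range(len(items) + 1):
            for combo in combinations(items, r):
                subs.append((sum(w for w, _ in combo), sum(v for _, v in combo)))
        return subs

    def knapsack_decision(items, capacity, target):
        """Meet-in-the-middle: is there a subset with weight <= capacity and value >= target?"""
        half = len(items) // 2
        left, right = half_subsets(items[:half]), half_subsets(items[half:])
        right.sort()                      # sort right-half subsets by weight
        weights = [w for w, _ in right]
        best, running = [], 0
        for _, v in right:                # best[i] = max value among right[0..i]
            running = max(running, v)
            best.append(running)
        for w, v in left:
            if w > capacity:
                continue
            idx = bisect_right(weights, capacity - w) - 1
            if idx >= 0 and v + best[idx] >= target:
                return True
        return False

    # Hypothetical instance: (weight, value) pairs.
    items = [(3, 4), (4, 5), (7, 10), (8, 11), (9, 13)]
    print(knapsack_decision(items, capacity=17, target=24))  # -> True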
    Cited: Baidu(39)
    Model Counting and Planning Using Extension Rule
    Lai Yong, Ouyang Dantong, Cai Dunbo, and Lü Shuai
    Abstract views: 741 | HTML views: 1 | PDF (1015KB) downloads: 872
    Methods based on the extension rule are new approaches to automated theorem proving and can efficiently solve problems with a high complementary factor. In this paper, a new strategy to re-implement ER, an algorithm based on the propositional extension rule, is proposed; the new implementation of ER is superior to the original one. On this basis, the extension rule is applied in three areas. Firstly, real applications often involve solving a set of analogous SAT problems. In contrast with solving these SAT problems separately, an algorithm called nER that solves them as a whole is developed; nER exploits the repetition property of ER and generally costs less time than the total time of using ER to solve each problem. Furthermore, based on ER, two new model-counting algorithms called #ER and #CDE are proposed, the latter being a combination of #ER and #DPLL. Experimental results show that #ER outperforms #DPLL on a wide range of problems and that #CDE integrates the advantages of #ER and #DPLL. Finally, an ER-based SAT solver is embedded into the conformant fast-forward planner to study the potential of ER-based methods in artificial intelligence planning. Preliminary results show the efficiency of ER and suggest future research topics.
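    The counting view behind extension-rule methods can be sketched classically: each falsifying assignment of a clause corresponds to a maxterm extending that clause, so inclusion-exclusion over clause subsets counts the non-models. The code below is a small illustrative counter under that view (exponential in the number of clauses), not the paper's #ER or #CDE implementation; the example formula is made up.

    from itertools import combinations

    def count_models(clauses, n_vars):
        """#SAT via the extension-rule view: non-models of a clause are the maxterms
        extending it, so inclusion-exclusion over clause subsets counts them.
        Clauses are sets of ints (positive/negative literals), variables 1..n_vars."""
        total_non_models = 0
        m = len(clauses)
        for k in range(1, m + 1):
            for subset in combinations(clauses, k):
                union = set().union(*subset)
                # A union containing both v and -v corresponds to no maxterm at all.
                if any(-lit in union for lit in union):
                    continue
                count = 2 ** (n_vars - len(union))  # maxterms extending every clause in the subset
                total_non_models += count if k % 2 == 1 else -count
        return 2 ** n_vars - total_non_models

    # Hypothetical formula: (x1 or x2) and (not x1 or x3) over 3 variables.
    clauses = [frozenset({1, 2}), frozenset({-1, 3})]
    print(count_models(clauses, 3))  # -> 4 satisfying assignments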
    Cited: Baidu(35)
    Anomaly Detection of Program Behaviors Based on System Calls and Homogeneous Markov Chain Models
    Tian Xinguang, Gao Lizhi, Sun Chunlai, and Zhang Eryang
    Abstract views: 793 | HTML views: 2 | PDF (395KB) downloads: 647
    Anomaly detection is the major direction of research in intrusion detection. Presented in this paper is a new method for anomaly detection of program behaviors, which is applicable to host-based intrusion detection systems that use system calls as audit data. The method constructs a first-order homogeneous Markov chain to represent the normal behavior profile of a privileged program, and associates the states of the Markov chain with the unique system calls in the training data. At the detection stage, the occurrence probabilities of the state sequences of the Markov chain are computed, and two different schemes can be used to determine whether the monitored program's behavior is normal or anomalous while taking the particularity of program behaviors into account. The method balances computational efficiency and detection accuracy. It is less computationally expensive than the method based on hidden Markov models introduced by Warrender et al., and is more applicable to on-line detection. Compared with the methods based on system call sequences presented by Hofmeyr and Forrest, the method in this paper achieves higher detection accuracy. The study empirically demonstrates the promising performance of the method, which has already been applied in practical host-based intrusion detection systems.
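    A first-order Markov chain of this kind can be trained and used to score traces as sketched below; the system-call names, the probability floor for unseen transitions, and the thresholding hint are illustrative assumptions, and the paper's two decision schemes are not reproduced.

    from collections import defaultdict

    def train_markov_chain(sequences):
        """Estimate first-order transition probabilities from normal system-call traces.
        Each trace is a list of system-call names; states are the distinct calls seen."""
        counts = defaultdict(lambda: defaultdict(int))
        for trace in sequences:
            for a, b in zip(trace, trace[1:]):
                counts[a][b] += 1
        return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
                for a, nxt in counts.items()}

    def sequence_probability(chain, trace, floor=1e-6):
        """Occurrence probability of a monitored trace; unseen transitions get a small floor."""
        p = 1.0
        for a, b in zip(trace, trace[1:]):
            p *= chain.get(a, {}).get(b, floor)
        return p

    # Hypothetical training traces and monitored traces; a threshold on the probability
    # (or its logarithm) would flag anomalous behavior.
    normal = [["open", "read", "read", "close"], ["open", "read", "close"]]
    chain = train_markov_chain(normal)
    print(sequence_probability(chain, ["open", "read", "close"]))
    print(sequence_probability(chain, ["open", "exec", "close"]))  # much smaller -> suspicious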
    Cited: Baidu(29)
    Network Situation Prediction Method Based on Spatial-Time Dimension Analysis
    Liu Yuling, Feng Dengguo, Lian Yifeng, Chen Kai, Wu Di
    Journal of Computer Research and Development    2014, 51 (8): 1681-1694.   DOI: 10.7544/issn1000-1239.2014.20121050
    Abstract views: 1739 | HTML views: 7 | PDF (2262KB) downloads: 1207
    Network security situation prediction methods help the security administrator better understand the network security situation and its trend. However, existing situation prediction methods cannot precisely reflect how the future security situation will vary as security elements change, nor do they handle the impact of interactions among the various security elements on the future security situation. In view of this, a network situation prediction method based on spatial-time dimension analysis is presented. The proposed method extracts security elements from the attacker, the defender, and the network environment. We predict and analyze these elements along the time dimension in order to provide data for the situation calculation. Using the predicted elements, the impact caused by neighboring nodes' security situation elements is computed based on spatial data mining theory and, combined with each node's degree of importance, the security situation value is obtained. To evaluate our method, experiments are conducted on the MIT Lincoln Lab public dataset. The experimental results indicate that our method is suitable for a real network environment and is considerably more accurate than the ARMA model method.
    Cited: Baidu(22)
    A Kernel and User-Based Collaborative Filtering Recommendation Algorithm
    Wang Peng, Wang Jingjing, and Yu Nenghai
    Abstract views: 1232 | HTML views: 10 | PDF (1955KB) downloads: 852
    With the development of information technology, people can access more and more information nowadays. To help users find the information that meets their needs or interests among large amounts of data, personalized recommendation technology has emerged and flourished. As the most widely used and successful recommendation technique, collaborative filtering has spread widely and attracted many researchers. Traditional collaborative filtering algorithms face data sparseness and cold-start problems: because they consider only the limited rating data, it is difficult to estimate user similarity accurately, which degrades the final recommendation results. This paper presents a kernel-density-estimation-based user interest model, and based on this model a user-based collaborative filtering recommendation algorithm using kernel methods is proposed. By mining users' latent interest suggested by the limited ratings, the algorithm can estimate the distribution of users' interest over the item space well and provide a better user similarity calculation method. A distance measurement based on classification similarity is proposed for the kernel methods, and two kernel functions are investigated to estimate the distribution of user interest. KL divergence is used to measure the similarity of users' interest distributions. Experiments show that the algorithm can effectively improve the performance of the recommendation system, especially in the case of sparse data.
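    The two core ingredients, a kernel density estimate of a user's interest over the item space and a KL-divergence comparison between two users' estimates, can be sketched as follows. The one-dimensional item-feature grid, the Gaussian kernel, and the bandwidth are illustrative assumptions, not the paper's exact construction.

    import numpy as np

    def interest_density(points, grid, bandwidth=0.5):
        """Gaussian kernel density estimate of a user's interest over a 1-D item-feature grid,
        normalized to sum to 1 over the grid points."""
        diffs = (grid[:, None] - np.asarray(points)[None, :]) / bandwidth
        dens = np.exp(-0.5 * diffs ** 2).sum(axis=1)
        return dens / dens.sum()

    def kl_divergence(p, q, eps=1e-12):
        """KL(p || q) between two discretized interest distributions."""
        p, q = p + eps, q + eps
        return float(np.sum(p * np.log(p / q)))

    # Hypothetical: feature values of the items each user rated highly.
    grid = np.linspace(0, 10, 200)
    user_a = interest_density([2.0, 2.5, 3.0], grid)
    user_b = interest_density([2.2, 3.1, 3.5], grid)
    user_c = interest_density([7.5, 8.0, 8.8], grid)
    print(kl_divergence(user_a, user_b))   # small: similar interests
    print(kl_divergence(user_a, user_c))   # large: dissimilar interests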
    Cited: Baidu(20)
    DNA Computation for a Category of Special Integer Planning Problem
    Wang Lei, Lin Yaping, and Li Zhiyong
    Abstract views: 646 | HTML views: 3 | PDF (375KB) downloads: 539
    DNA computation based on the theory of biochemical reactions performs better than traditional computing methods on current silicon computers in solving a class of intractable computational problems, especially NP-complete problems, so studying DNA computation is of great importance. New concepts such as the rank of a constraint equation group and three kinds of constraint complement links of a constraint equation group are proposed. According to these concepts, and based on the fluorescence-labeling method in the surface-based approach to DNA computation, a novel algorithm based on DNA computation is designed, which finds the optimal solutions to a category of special integer planning problems. By using the fluorescence-quenching technique to eliminate false solutions from all possible solutions to the given integer-planning problem, the new algorithm can identify all of the feasible solutions, and then obtain all the optimal solutions by comparing the objective-function values of those feasible solutions. Analysis shows that the new algorithm has good characteristics such as simple encoding, low cost, and short operating time.
    Cited: Baidu(19)
    Fuzzy Neural Network Optimization by a Multi-Objective Particle Swarm Optimization Algorithm
    Ma Ming, Zhou Chunguang, Zhang Libiao, and Ma Jie
    Abstract views: 669 | HTML views: 2 | PDF (468KB) downloads: 692
    Designing a set of fuzzy neural networks can be considered as solving a multi-objective optimization problem in which performance and complexity are two conflicting criteria. An algorithm for solving this multi-objective optimization problem is presented based on particle swarm optimization, through an improved selection scheme for the global and individual best positions. The search for the Pareto-optimal set of the fuzzy neural network optimization problem is performed, and the trade-off between accuracy and complexity of fuzzy neural networks is clearly shown by the obtained non-dominated solutions. Numerical simulations for taste identification of tea show the effectiveness of the proposed algorithm.
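    The Pareto bookkeeping at the heart of such a multi-objective PSO, deciding which candidate networks are non-dominated on the accuracy/complexity trade-off, can be sketched as follows (the two-objective tuples and the minimization convention are assumptions).

    def dominates(a, b):
        """a dominates b if it is no worse on every objective and strictly better on one.
        Objectives (e.g. error, complexity) are minimized."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def non_dominated(points):
        """Filter a swarm's objective vectors down to the current Pareto front."""
        return [p for p in points
                if not any(dominates(q, p) for q in points if q != p)]

    # Hypothetical (error, number-of-rules) pairs for candidate fuzzy neural networks.
    candidates = [(0.10, 12), (0.08, 20), (0.15, 6), (0.12, 14), (0.08, 25)]
    print(non_dominated(candidates))  # -> [(0.10, 12), (0.08, 20), (0.15, 6)]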
    Cited: Baidu(14)
    A Multi-Agent Social Evolutionary Algorithm for Project Optimization Scheduling
    Pan Xiaoying and Jiao Licheng
    Abstract views: 466 | PDF (595KB) downloads: 736
    A multi-agent social evolutionary algorithm for the precedence- and resource-constrained single-mode project optimization scheduling problem (RCPSP-MASEA) is proposed. RCPSP-MASEA is used to obtain optimal scheduling sequences so that the duration of the project is minimized. With the intrinsic properties of RCPSP in mind, multi-agent systems, a social acquaintance net, and evolutionary algorithms are integrated to form a new algorithm. In this algorithm, all agents live in a lattice-like environment. Using the designed behaviors, RCPSP-MASEA realizes the agents' ability to sense and act on the environment in which they live, and the local environments of all the agents are constructed by the social acquaintance net. Based on the characteristics of project optimization scheduling, the encoding of solutions and operators such as competition, crossover, and self-learning are given. While interacting with the environment and the other agents, each agent increases its energy as much as possible, so that RCPSP-MASEA can find the optima. The performance of the algorithm is analyzed through a thorough computational study on a standard set of project instances from PSPLIB. The experimental results show that RCPSP-MASEA performs well and can reach near-optimal solutions in reasonable time. Compared with other heuristic algorithms, RCPSP-MASEA also has some advantages.
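    The decoding step such an algorithm relies on, turning an agent's activity priority list into a resource-feasible schedule, is typically a serial schedule generation scheme. The sketch below is a generic single-resource SGS with made-up instance data, not the paper's multi-agent machinery.

    def serial_sgs(order, duration, preds, demand, capacity, horizon=1000):
        """Serial schedule generation scheme: decode an activity priority list into start times.

        order: precedence-feasible activity list; duration/demand/preds: per-activity data;
        capacity: units of the single renewable resource. Returns (start times, makespan).
        """
        usage = [0] * horizon          # resource units in use at each time slot
        start, finish = {}, {}
        for act in order:
            earliest = max((finish[p] for p in preds[act]), default=0)
            t = earliest
            while any(usage[s] + demand[act] > capacity
                      for s in range(t, t + duration[act])):
                t += 1                 # slide right until the resource profile fits
            for s in range(t, t + duration[act]):
                usage[s] += demand[act]
            start[act], finish[act] = t, t + duration[act]
        return start, max(finish.values())

    # Hypothetical 4-activity instance with one resource of capacity 4.
    duration = {"A": 3, "B": 2, "C": 2, "D": 1}
    demand   = {"A": 2, "B": 3, "C": 2, "D": 1}
    preds    = {"A": [], "B": [], "C": ["A"], "D": ["B", "C"]}
    print(serial_sgs(["A", "B", "C", "D"], duration, preds, demand, capacity=4))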
    Cited: Baidu(13)
    Verifiable Secret Redistribution Protocol Based on Additive Sharing
    Yu Jia, Li Daxing, and Fan Yuling
    Abstract views: 838 | HTML views: 3 | PDF (277KB) downloads: 638
    A non-interactive verifiable secret redistribution protocol based on additive sharing is put forward, which also has the threshold property. It can be applied to any set of shareholders and can alter the access structure, so the set of new shareholders does not need to coincide with the set of old shareholders. The protocol adopts additive sharing and share back-up techniques, so it can not only verify the correctness of secret shares and subshares but also recover corrupted secret shares. In addition, it resolves the hard problem of how to identify the set of bad shareholders. Thanks to additive sharing, it can conveniently be transformed into a redistribution protocol for proactive RSA. The protocol is correct, robust, and secure, and it performs well in many respects.
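    The additive-sharing building block the protocol rests on can be sketched as follows; the modulus, the group sizes, and the naive re-sharing at the end are illustrative assumptions and leave out the verification and back-up machinery.

    import secrets

    PRIME = 2 ** 127 - 1   # illustrative modulus; a deployment would fix this by protocol

    def additive_share(secret, n):
        """Split `secret` into n additive shares that sum to it modulo PRIME."""
        shares = [secrets.randbelow(PRIME) for _ in range(n - 1)]
        shares.append((secret - sum(shares)) % PRIME)
        return shares

    def reconstruct(shares):
        """Recover the secret: simply add all shares modulo PRIME."""
        return sum(shares) % PRIME

    secret = 123456789
    shares = additive_share(secret, n=5)
    assert reconstruct(shares) == secret
    # Redistribution re-shares each share to a new group; the shares of the shares
    # again sum (share-wise, then group-wise) to the original secret.
    print(reconstruct([reconstruct(additive_share(s, 3)) for s in shares]))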
    Cited: Baidu(13)
    Survey of Internet of Things Security
    Zhang Yuqing, Zhou Wei, Peng Anni
    Journal of Computer Research and Development    2017, 54 (10): 2130-2143.   DOI: 10.7544/issn1000-1239.2017.20170470
    Abstract views: 5163 | HTML views: 218 | PDF (1747KB) downloads: 4977
    With the development of the smart home, intelligent healthcare, and smart cars, the application fields of the IoT are becoming more and more widespread, and its security and privacy are receiving more attention from researchers. Currently, research on IoT security is still at an initial stage, and most existing results cannot solve the major security problems in the development of the IoT well. In this paper, we first introduce the three-layer logical architecture of the IoT and outline the security problems and research priorities of each level. Then we discuss security issues such as privacy preservation and intrusion detection, which need special attention in the main IoT application scenarios (smart home, intelligent healthcare, connected vehicles, smart grid, and other industrial infrastructure). Through synthesizing and analyzing the deficiencies of existing research and the causes of the security problems, we point out five major technical challenges in IoT security: privacy protection in data sharing, device security protection under limited resources, more effective intrusion detection and defense systems and methods, access control for automated device operations, and cross-domain authentication of mobile devices. We finally detail each technical challenge and point out future research hotspots in IoT security.
    Cited: Baidu(13)
    An Iterative Gait Prototype Learning Algorithm Based on Tangent Distance
    Chen Changyou and Zhang Junping
    Abstract views: 329 | PDF (757KB) downloads: 580
    As the only biometric identification technique suitable for remote surveillance, gait recognition is regarded as having important potential value, and many algorithms have been proposed; at the same time, it faces many challenges. One of these challenges is how to extract features efficiently from a sequence of gait frames. To solve this problem, and based on the fact that the gait energy image (GEI) is effective for feature representation, an iterative prototype learning algorithm based on tangent distance is proposed. First, it is assumed that different gaits lie on different manifolds. Accordingly, the proposed algorithm refines the definition of the gait energy image using tangent distance. Then an iterative algorithm is proposed to learn the prototypes by solving an optimization problem. Finally, principal component analysis (PCA) is performed on the prototypes to obtain gait features for classification. The proposed method is proved to converge, and experimental results show its promising accuracy compared with GEIs. The rationality of the assumption that gaits lie on specific manifolds is also validated through experiments.
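    The GEI representation the prototypes are built from is simply a temporal average of aligned binary silhouettes, as in this minimal sketch (the toy silhouettes are made up; the tangent-distance refinement and the PCA step are not shown).

    import numpy as np

    def gait_energy_image(silhouettes):
        """Average a gait cycle's aligned binary silhouettes into a gait energy image (GEI).

        silhouettes: array of shape (frames, height, width) with values in {0, 1}.
        Each GEI pixel is the fraction of the cycle during which that pixel is foreground.
        """
        frames = np.asarray(silhouettes, dtype=float)
        return frames.mean(axis=0)

    # Hypothetical toy cycle: three 4x4 binary silhouettes.
    cycle = np.array([
        [[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0]],
        [[0, 1, 1, 0], [1, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0]],
        [[0, 1, 1, 0], [0, 1, 1, 1], [0, 1, 1, 0], [0, 1, 1, 0]],
    ])
    gei = gait_energy_image(cycle)
    print(gei)  # values in [0, 1]; a prototype would be refined from several such GEIs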
    Cited: Baidu(12)
    An e-Learning Service Discovery Algorithm Based on User Satisfaction
    Zhu Zhengzhou, Wu Zhongfu, and Wu Kaigui
    Abstract views: 695 | HTML views: 1 | PDF (1132KB) downloads: 530
    More and more e-Learning services are used in computer-supported collaborative learning, so it is becoming important to locate suitable e-Learning services accurately and efficiently. In this paper, an improved algorithm named eLSDAUS is proposed to extend the existing semantic-based e-Learning service matchmaking algorithm. In the algorithm, a new factor, user satisfaction, which reflects how users feel about the result of service discovery, is introduced. The algorithm allows users to take part in the process of e-Learning service discovery and to evaluate the discovery results. Users' evaluations, in the form of satisfaction scores, are fed back to the system. Adopting an amendatory function that takes user satisfaction as input, the system modifies the weights of each property of the advertised service, so that the total match degree of service discovery approaches its best value. Two methods are adopted to encourage users to use the e-Learning service discovery system. Experiments indicate that, compared with traditional algorithms, the precision of service discovery is improved by more than 3 percent when the number of advertised services reaches 10,000, and the effect becomes better as the number of advertised services grows. After learning for 127 days, over 93% of students were satisfied with the e-Learning service discovery results.
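    One plausible shape of such a satisfaction feedback loop is sketched below; the property names, the linear weight update, and the 0-1 satisfaction scale are all assumptions for illustration and are not the paper's amendatory function.

    def match_degree(weights, property_scores):
        """Weighted total match degree between a request and an advertised service."""
        return sum(weights[p] * property_scores[p] for p in weights)

    def update_weights(weights, property_scores, satisfaction, rate=0.1):
        """Nudge property weights toward properties that matched well when users are
        satisfied (and away from them when they are not), then renormalize."""
        adjusted = {p: max(1e-6, w + rate * (satisfaction - 0.5) * property_scores[p])
                    for p, w in weights.items()}
        total = sum(adjusted.values())
        return {p: w / total for p, w in adjusted.items()}

    # Hypothetical service properties and one user feedback round.
    weights = {"topic": 0.5, "difficulty": 0.3, "media_type": 0.2}
    scores = {"topic": 0.9, "difficulty": 0.4, "media_type": 0.7}
    print(match_degree(weights, scores))
    weights = update_weights(weights, scores, satisfaction=0.8)   # satisfied user
    print(weights)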
    Cited: Baidu(11)
    Nonlinear Diffusion based Image Denoising Coupling Gradient Fidelity Term
    Zhu Lixin, Pheng Ann Heng, and Xia Deshen
    Abstract views: 671 | HTML views: 0 | PDF (671KB) downloads: 502
    Image denoising with second-order nonlinear diffusion PDEs often leads to an undesirable staircase effect, namely the transformation of smooth regions into piecewise constant ones. In this paper, these nonlinear diffusion models are improved by adding the Euler-Lagrange equation derived from a gradient fidelity term, which describes the similarity in gradient between the noisy image and the restored one. After coupling this new restriction equation, the classical second-order PDE-based denoising models produce piecewise smooth results while preserving sharp jump discontinuities in images. The convexity of the proposed model is proved, and the existence and uniqueness of the optimal solution are ensured. The influence of introducing spatial regularization on the gradient estimation is also analyzed, and the importance of proper regularization-parameter selection for the final results is emphasized theoretically and experimentally. In addition, the gradient fidelity term is integrable in the bounded variation function space, which lets the model outperform fourth-order nonlinear PDE-based denoising methods, which suffer from leakage problems and sensitivity to high-frequency components in images. Experimental results show that the new model alleviates the staircase effect to some extent and preserves image features such as textures and edges well.
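    For reference, a bare second-order nonlinear diffusion step of the Perona-Malik type, the kind of model the paper augments with a gradient fidelity term, can be sketched as follows; the parameter values and the periodic border handling are illustrative, and the coupled Euler-Lagrange term itself is not implemented here.

    import numpy as np

    def perona_malik_step(u, kappa=0.1, dt=0.2):
        """One explicit step of classic second-order nonlinear (Perona-Malik) diffusion.

        The conduction coefficient g shrinks where gradients are large, so edges diffuse
        less than smooth regions. u is a 2-D float image.
        """
        # differences to the four neighbours (periodic border via np.roll keeps the sketch short)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        g = lambda d: np.exp(-(d / kappa) ** 2)
        return u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

    # Hypothetical noisy step edge.
    img = np.hstack([np.zeros((32, 16)), np.ones((32, 16))])
    img += 0.1 * np.random.default_rng(0).standard_normal(img.shape)
    for _ in range(20):
        img = perona_malik_step(img)
    # after a few iterations the noise is smoothed while the step edge is largely preserved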
    Cited: Baidu(9)
    A Self-Adaptive Image Steganography Algorithm Based on Cover-Coding and Markov Model
    Zhang Zhan, Liu Guangjie, Dai Yuewei, and Wang Zhiquan
    Abstract views: 794 | HTML views: 2 | PDF (2960KB) downloads: 432
    How to design steganography algorithms with large capacity, low distortion, and high statistical security is both a difficulty and a research hotspot. A self-adaptive image steganography algorithm that takes account of both perceptual distortion and second-order statistical security is proposed. It introduces the smoothness of the various parts of the cover object into the generation of cover codes, and reduces distortion by the reasonable use of a cluster of cover codes in each part of the cover object. For embedding, in order to improve statistical security, the algorithm uses a dynamic compensation method based on an image Markov chain model, and it embeds secret information into the two least significant bit (LTSB) planes to ensure capacity. Experimental results show that the proposed algorithm causes lower distortion and smaller changes to the cover's statistical distribution than the stochastic LTSB-matching steganography algorithm and the algorithm that uses only one cover code under the same embedding payload, and it supports larger payloads than single-cover-code embedding when the distortion and statistical distribution changes are comparable.
    Cited: Baidu(8)
    Survey on Privacy Preserving Techniques for Blockchain Technology
    Zhu Liehuang, Gao Feng, Shen Meng, Li Yandong, Zheng Baokun, Mao Hongliang, Wu Zhen
    Journal of Computer Research and Development    2017, 54 (10): 2170-2186.   DOI: 10.7544/issn1000-1239.2017.20170471
    Abstract views: 9545 | HTML views: 452 | PDF (3265KB) downloads: 5983
    The core features of blockchain technology are "decentralization" and "de-trusting". As a distributed ledger technology, smart contract infrastructure platform, and novel distributed computing paradigm, it can effectively build programmable currency, programmable finance, and a programmable society, which will have a far-reaching impact on finance and other fields and drive a new round of technological and application change. While blockchain technology can improve efficiency, reduce costs, and enhance data security, it still faces serious privacy issues that have drawn wide attention from researchers. This survey first analyzes the technical characteristics of the blockchain, defines the concepts of identity privacy and transaction privacy, points out the advantages and disadvantages of blockchain technology in privacy protection, and introduces the attack methods in existing research, such as transaction tracing and account clustering. We then introduce a variety of privacy-protection mechanisms, including malicious-node detection and access restriction for the network layer; transaction mixing, encryption, and limited-release techniques for the transaction layer; and defense mechanisms for the blockchain application layer. In the end, we discuss the limitations of the existing technologies and envision future directions on this topic. In addition, the regulatory approach to malicious use of blockchain technology is discussed.
    Cited: Baidu(8)
    SVM Fast Training Algorithm Research Based on Multi-Lagrange Multiplier
    Ye Ning, Sun Ruixiang, and Dong Yisheng
    Abstract views: 715 | HTML views: 1 | PDF (383KB) downloads: 581
    A fast support vector machine training method based on the coordinated optimization of multiple Lagrange multipliers (MLSVM) is proposed, and the formula defining the feasible region of each multiplier is presented. The algorithm approaches the optimum more precisely and quickly because analytic expressions are adopted in the optimization of each multiplier. The SMO algorithm is proved to be an instance of MLSVM. Three individual algorithms, MLSVM1, MLSVM2, and MLSVM3, are presented under the theoretical guidance of this method according to different learning strategies. The learning speed of MLSVM1 and MLSVM2 is about the same as that of SMO when the test data set is small (<5000), but they fail when the test data set becomes larger. MLSVM3 is an improved version of the former two algorithms and of the SMO algorithm. It not only overcomes the failure of MLSVM1 and MLSVM2, but also runs faster than the SMO algorithm, with an improvement of 7.4% to 4130% on several test data sets.
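    For context, the two-multiplier analytic step that MLSVM generalizes (standard SMO) can be sketched as follows; the numeric inputs are made up, and MLSVM's multi-multiplier feasible-region formulas are not reproduced.

    def smo_pair_update(a1, a2, y1, y2, e1, e2, k11, k12, k22, c):
        """Analytic update of one Lagrange-multiplier pair, as in standard SMO.

        e1, e2 are prediction errors f(x_i) - y_i; kij are kernel values; c is the box bound.
        Returns the updated (a1, a2), leaving them unchanged if no progress is possible.
        """
        if y1 != y2:                      # box constraints for the pair
            low, high = max(0.0, a2 - a1), min(c, c + a2 - a1)
        else:
            low, high = max(0.0, a1 + a2 - c), min(c, a1 + a2)
        eta = k11 + k22 - 2.0 * k12       # second derivative along the constraint line
        if eta <= 0 or low >= high:
            return a1, a2
        a2_new = a2 + y2 * (e1 - e2) / eta
        a2_new = min(high, max(low, a2_new))           # clip to the feasible segment
        a1_new = a1 + y1 * y2 * (a2 - a2_new)          # keep sum(y_i * a_i) constant
        return a1_new, a2_new

    # Hypothetical values for one working pair.
    print(smo_pair_update(a1=0.2, a2=0.4, y1=1, y2=-1, e1=0.3, e2=-0.5,
                          k11=1.0, k12=0.2, k22=1.0, c=1.0))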
    Cited: Baidu(8)
    A Study of Speech Recognition Based on RNN-RBM Language Model
    Li Yaxiong, Zhang Jianqiang, Pan Deng, Hu Dan
    Journal of Computer Research and Development    2014, 51 (9): 1936-1944.   DOI: 10.7544/issn1000-1239.2014.20140211
    Abstract views: 2729 | HTML views: 11 | PDF (1524KB) downloads: 1392
    In recent years, deep learning has emerged as a new way of training multilayer neural networks with back propagation. Its application in the field of language modeling, such as the restricted Boltzmann machine language model, has achieved good results. A language model based on a neural network can assess the probability of the next word according to the word sequence, which is mapped into a continuous space, and such a model can alleviate the data sparseness problem. Besides, some researchers construct language models using recurrent neural networks in order to make full use of the preceding context to predict the next word, which relaxes the long-distance dependency restriction in language. This paper attempts to capture long-distance information based on an RNN-RBM model. In addition, the dynamic adaptation of the language model is analyzed and illustrated according to the language features. The experimental results show a considerable improvement in the efficiency of large-vocabulary continuous speech recognition using the RNN-RBM language model.
    Cited: Baidu(8)
    An Improved Working Set Selection Strategy for Sequential Minimal Optimization Algorithm
    Zeng Zhiqiang, Wu Qun, Liao Beishui, and Zhu Shunzhi
    Abstract views: 850 | HTML views: 2 | PDF (945KB) downloads: 618
    Working set selection is an important step in sequential minimal optimization (SMO) type methods for training support vector machines (SVM). However, the feasible-direction strategy for selecting the working set may degrade the performance of the kernel cache maintained in standard SMO. In this paper, an improved working set selection strategy for SMO is presented to handle this difficulty, based on the decrease of the objective function predicted by second-order information. The new strategy takes into consideration both the number of iterations and the kernel-cache performance related to the selection of the working set, in order to improve the efficiency of the kernel cache and thereby reduce the number of kernel evaluations of the algorithm as a whole. As a result, the training efficiency of the new method improves greatly compared with the original version. Moreover, SMO with the new working set selection strategy is guaranteed to converge to an optimal solution in theory. Experiments on well-known data sets show that the proposed method is remarkably faster than standard SMO; the more complex the kernel, the higher the dimension of the space, and the relatively smaller the cache, the greater the improvement.
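    The baseline second-order selection rule that the improved strategy builds on can be sketched as follows; the array-based state and the toy numbers are assumptions, and the cache-aware weighting the paper adds is only noted in a comment.

    import numpy as np

    def select_working_set(alpha, grad, y, K, c, eps=1e-3, tau=1e-12):
        """Second-order working-set selection for SMO-type SVM training.

        alpha, grad, y are 1-D arrays (dual variables, dual gradient, labels +/-1);
        K is the kernel matrix and c the box constraint. Returns (i, j), or None
        when the KKT gap is below eps. A cache-aware variant would additionally
        weigh how likely the chosen rows are to be in the kernel cache.
        """
        neg_yg = -y * grad
        up = ((y == 1) & (alpha < c)) | ((y == -1) & (alpha > 0))
        low = ((y == 1) & (alpha > 0)) | ((y == -1) & (alpha < c))
        i = int(np.flatnonzero(up)[np.argmax(neg_yg[up])])
        m = neg_yg[i]
        if m - neg_yg[low].min() <= eps:
            return None                                # (approximate) optimality reached
        best_j, best_gain = -1, 0.0
        for t in np.flatnonzero(low & (neg_yg < m)):
            b = m - neg_yg[t]
            a = max(K[i, i] + K[t, t] - 2.0 * K[i, t], tau)
            gain = b * b / a                           # predicted objective decrease
            if gain > best_gain:
                best_j, best_gain = int(t), gain
        return i, best_j

    # Hypothetical tiny problem state (4 training points, kernel with unit diagonal).
    y = np.array([1, 1, -1, -1]); alpha = np.array([0.0, 0.3, 0.3, 0.0])
    grad = np.array([-1.0, -0.2, -0.4, -1.0]); K = np.eye(4) + 0.1
    print(select_working_set(alpha, grad, y, K, c=1.0))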
    Cited: Baidu(7)