ISSN 1000-1239 CN 11-1777/TP

Most cited articles

    A Grid & P2P Trust Model Based on Recommendation Evidence Reasoning
    Zhu Junmao, Yang Shoubao, Fan Jianping, and Chen Mingyu
    Under the mixed computing environment of grid and P2P (Grid & P2P), grid nodes provide services with QoS guarantees, whereas sharing the computing resources of P2P nodes is a voluntary action of users without any QoS guarantee, and users are not responsible for their actions. It is therefore difficult to establish trust relationships among users with traditional trust mechanisms. Referring to trust relationship models among people in society, a Grid & P2P trust model based on recommendation evidence reasoning is designed to solve this problem by building a recommendation mechanism in Grid & P2P and integrating the recommendation evidence with D-S theory. Theoretical analysis and simulations prove that the model can tackle the trust problem under Grid & P2P in a simple and efficient way.
    Cited: Baidu(71)
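    The evidence-combination step the model above relies on is Dempster's rule from D-S theory. The following minimal Python sketch shows that rule; the two-element frame {trust, distrust} and the example mass values are illustrative assumptions, not taken from the paper.
```python
from itertools import product

def combine_dempster(m1, m2):
    """Combine two mass functions over frozenset focal elements (Dempster's rule)."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2                 # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two recommenders report evidence about a node over the frame {trust, distrust}.
T, D = frozenset({"trust"}), frozenset({"distrust"})
TD = T | D                                      # ignorance: mass on the whole frame
m1 = {T: 0.6, D: 0.1, TD: 0.3}
m2 = {T: 0.5, D: 0.2, TD: 0.3}
fused = combine_dempster(m1, m2)
print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})   # fused belief
```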
    Multicast Scheduling in Buffered Crossbar Switches with Multiple Input Queues
    Sun Shutao, He Simin, Zheng Yanfeng, and Gao Wen
    The scheduling of multicast traffic in bufferless crossbar switches has been extensively investigated. However, the proposed solutions are hardly practical for high-capacity switches because of either poor performance or high complexity. A buffered crossbar switch with multiple input queues per input port for transferring multicast traffic is proposed. Under this architecture, the scheduler operates in three stages: cell assignment, input scheduling, and output scheduling. Scheduling algorithms ranging in complexity from O(1) upward are presented for the different stages. Simulation results show that both the number of input queues and the size of the crosspoint buffers affect the throughput of a buffered crossbar under multicast traffic. Under bursty multicast traffic, however, increasing the number of input queues gains more, regardless of whether the O(1) HA-RR-RR algorithm or the higher-complexity MMA-MRSF-LQF algorithm is used. This shows that the proposed scheme is well suited to high-performance switches.
    Cited: Baidu(40)
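    Of the three scheduling stages named above, the first (cell assignment) is the simplest to illustrate: arriving multicast cells at an input port are spread over its k input queues. The sketch below uses a plain round-robin pointer as a stand-in; the HA and MMA assignment policies evaluated in the paper are not reproduced, and the queue count and cell format are assumptions.
```python
from collections import deque

class MulticastInput:
    """One input port with k multicast queues; a cell is (payload, fanout_set)."""
    def __init__(self, k):
        self.queues = [deque() for _ in range(k)]
        self.rr = 0                         # round-robin assignment pointer

    def assign(self, cell):
        """Stage 1 (cell assignment): place an arriving cell in the next queue."""
        self.queues[self.rr].append(cell)
        self.rr = (self.rr + 1) % len(self.queues)

port = MulticastInput(k=4)
port.assign(("cell-A", {0, 2, 5}))          # fanout: output ports 0, 2, 5
port.assign(("cell-B", {1, 3}))
print([len(q) for q in port.queues])        # cells spread across the input queues
```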
    Model Counting and Planning Using Extension Rule
    Lai Yong, Ouyang Dantong, Cai Dunbo, and Lü Shuai
    Methods based on the extension rule are new approaches to automated theorem proving and can efficiently solve problems with a high complementary factor. In this paper, a new strategy is proposed to re-implement ER, an algorithm based on the propositional extension rule; the new implementation is superior to the original one. On this basis, the extension rule is applied in three areas. Firstly, sets of analogous SAT problems arise in real applications; in contrast with solving these SAT problems separately, an algorithm called nER is developed that solves them as a whole. nER exploits the repetition property of ER and generally costs less time than the total time of using ER to solve every problem. Furthermore, based on ER, two new algorithms called #ER and #CDE are proposed, the latter being a combination of #ER and #DPLL. Experimental results show that #ER outperforms #DPLL on a wide range of problems and that #CDE integrates the advantages of #ER and #DPLL. Finally, an ER-based SAT solver is embedded into the Conformant-FF planner to study the potential of ER-based methods in artificial intelligence planning. Preliminary results show the efficiency of ER and suggest future research topics.
    Cited: Baidu(35)
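    Extension-rule methods count models by reasoning about the assignments that falsify clauses. The sketch below is a brute-force counter in that spirit, using inclusion-exclusion over falsified clauses; it is exponential in the number of clauses and only illustrates the counting principle, not the #ER or #CDE algorithms.
```python
from itertools import combinations

def count_models(clauses, n_vars):
    """#SAT by inclusion-exclusion over sets of falsified clauses.

    Clauses are tuples of non-zero ints (DIMACS style: 3 means x3, -3 means not x3).
    Exponential in the number of clauses -- an illustration only.
    """
    total = 2 ** n_vars
    union = 0
    for k in range(1, len(clauses) + 1):
        for subset in combinations(clauses, k):
            lits = {l for c in subset for l in c}
            if any(-l in lits for l in lits):       # complementary literals: empty intersection
                continue
            falsifying = 2 ** (n_vars - len({abs(l) for l in lits}))
            union += (-1) ** (k + 1) * falsifying
    return total - union

# (x1 or x2) and (not x1 or x3) over 3 variables has 4 models.
print(count_models([(1, 2), (-1, 3)], 3))
```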
    Network Situation Prediction Method Based on Spatial-Time Dimension Analysis
    Liu Yuling, Feng Dengguo, Lian Yifeng, Chen Kai, Wu Di
    Journal of Computer Research and Development    2014, 51 (8): 1681-1694.   DOI: 10.7544/issn1000-1239.2014.20121050
    Network security situation prediction methods help the security administrator better understand the network security situation and its trend. However, existing situation prediction methods cannot precisely reflect how the future security situation varies as security elements change, nor do they handle the impact of interactions among security elements on the future situation. In view of this, a network situation prediction method based on spatial-time dimension analysis is presented. The proposed method extracts security elements from the attacker, the defender, and the network environment, and predicts and analyzes these elements along the time dimension to provide data for situation calculation. Using the predicted elements, the impact caused by neighboring nodes' security situation elements is computed based on spatial data mining theory and, in combination with each node's degree of importance, the security situation value is obtained. To evaluate the method, experiments are conducted on the MIT Lincoln Laboratory public dataset. The results indicate that the method is suitable for real network environments and is considerably more accurate than the ARMA model method.
    Cited: Baidu(22)
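    The spatial step described above combines a node's own predicted situation with the impact of its neighbours, weighted by node importance. The sketch below uses a simple linear aggregation as a stand-in; the weighting scheme, the alpha parameter, and the example topology are assumptions, not the paper's formulas.
```python
def network_situation(own, neighbors, importance, alpha=0.6):
    """Aggregate per-node security situation values over a network.

    own:        {node: predicted own situation value (time-dimension step)}
    neighbors:  {node: set of neighboring nodes}
    importance: {node: importance weight in [0, 1]}
    alpha:      share of a node's value that comes from itself (assumed parameter).
    """
    combined = {}
    for node, value in own.items():
        nbrs = neighbors.get(node, set())
        impact = sum(own[n] * importance[n] for n in nbrs) / len(nbrs) if nbrs else 0.0
        combined[node] = importance[node] * (alpha * value + (1 - alpha) * impact)
    # Overall network situation: sum of the importance-weighted node values.
    return combined, sum(combined.values())

own = {"web": 0.7, "db": 0.4, "mail": 0.2}
nbrs = {"web": {"db", "mail"}, "db": {"web"}, "mail": {"web"}}
imp = {"web": 0.5, "db": 0.3, "mail": 0.2}
print(network_situation(own, nbrs, imp))
```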
    A Kernel and User-Based Collaborative Filtering Recommendation Algorithm
    Wang Peng, Wang Jingjing, and Yu Nenghai
    With the development of information technology, people can access more and more information. To help users find the information that meets their needs or interests among large amounts of data, personalized recommendation technology has emerged and flourished. As the most widely used and successful recommendation technique, collaborative filtering has spread widely and attracted many researchers. Traditional collaborative filtering algorithms face data sparseness and cold-start problems: because they consider only the limited rating data, it is difficult to estimate user similarity accurately, which degrades the final recommendation results. This paper presents a kernel-density-estimation-based user interest model and, based on it, a kernel-based user-oriented collaborative filtering algorithm. By mining users' latent interest suggested by their limited ratings, the algorithm estimates the distribution of each user's interest over the item space and provides a better user similarity calculation method. A distance measure based on classification similarity is proposed for the kernel methods, two kernel functions are investigated to estimate the distribution of user interest, and KL divergence is used to measure the similarity between users' interest distributions. Experiments show that the algorithm effectively improves the performance of the recommender system, especially when the data are sparse.
    Cited: Baidu(20)
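    Two ingredients described above can be sketched directly: a kernel-density-style estimate of a user's interest distribution and a symmetrised KL divergence between two such distributions. The Gaussian kernel, the one-dimensional item-feature axis, and the example ratings are assumptions; the paper's classification-similarity distance is not reproduced.
```python
import numpy as np

def interest_density(rated_items, ratings, grid, bandwidth=0.5):
    """Rating-weighted Gaussian KDE of a user's interest over an item-feature grid."""
    rated_items, ratings = np.asarray(rated_items, float), np.asarray(ratings, float)
    k = np.exp(-0.5 * ((grid[:, None] - rated_items[None, :]) / bandwidth) ** 2)
    density = (k * ratings[None, :]).sum(axis=1)
    return density / density.sum()                    # normalise to a distribution

def symmetric_kl(p, q, eps=1e-12):
    """Symmetrised KL divergence between two discrete distributions."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

grid = np.linspace(0.0, 10.0, 101)                    # item-feature axis (assumed)
u1 = interest_density([1.0, 2.0, 2.5], [5, 4, 4], grid)
u2 = interest_density([2.0, 3.0, 8.0], [4, 5, 2], grid)
print("dissimilarity:", symmetric_kl(u1, u2))         # lower = more similar users
```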
    DNA Computation for a Category of Special Integer Planning Problem
    Wang Lei, Lin Yaping, and Li Zhiyong
    DNA computation, based on the theory of biochemical reactions, outperforms traditional computing methods on current silicon computers for a class of intractable computational problems, especially NP-complete problems, so it is of great importance to study DNA computation. New concepts such as the rank of a constraint equation group and three kinds of constraint complement links of a constraint equation group are proposed. Based on these concepts and the fluorescence-labeling method of the surface-based approach to DNA computation, a novel DNA-computation algorithm is designed that finds the optimal solutions to a category of special integer planning problems. By using the fluorescence-quenching technique to eliminate false solutions from all possible solutions to the given integer planning problem, the new algorithm identifies all feasible solutions and then obtains all optimal solutions by comparing the objective-function values of those feasible solutions. Analyses show that the new algorithm has good characteristics such as simple encoding, low cost, and short operating time.
    Cited: Baidu(19)
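    Logically, the DNA procedure above generates all candidate assignments, removes those violating the constraint equations, and compares objective values among the remaining feasible solutions. The sketch below performs that same generate/filter/compare search for a tiny 0-1 integer planning instance on a conventional computer; the instance is an illustrative assumption.
```python
from itertools import product

def solve_01_integer_planning(costs, constraints):
    """Enumerate all 0-1 vectors, keep those satisfying a.x == b, maximise c.x.

    Mirrors the generate / remove-false-solutions / compare steps of the DNA
    algorithm, but on silicon. Exponential in the number of variables.
    """
    n, best, best_value = len(costs), [], None
    for x in product((0, 1), repeat=n):
        if all(sum(a_i * x_i for a_i, x_i in zip(a, x)) == b for a, b in constraints):
            value = sum(c_i * x_i for c_i, x_i in zip(costs, x))
            if best_value is None or value > best_value:
                best, best_value = [x], value
            elif value == best_value:
                best.append(x)
    return best, best_value

# maximise 3x1 + 2x2 + 4x3  subject to  x1 + x2 + x3 = 2
print(solve_01_integer_planning([3, 2, 4], [([1, 1, 1], 2)]))
```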
    Fuzzy Neural Network Optimization by a Multi-Objective Particle Swarm Optimization Algorithm
    Ma Ming, Zhou Chunguang, Zhang Libiao, and Ma Jie
    Designing a set of fuzzy neural networks can be considered as solving a multi-objective optimization problem in which performance and complexity are two conflicting criteria. An algorithm for solving this multi-objective optimization problem is presented based on particle swarm optimization, by improving the way the global and individual extrema are selected. The search for the Pareto-optimal set of the fuzzy neural network optimization problem is performed, and the tradeoff between accuracy and complexity of fuzzy neural networks is clearly shown by the obtained non-dominated solutions. Numerical simulations on taste identification of tea show the effectiveness of the proposed algorithm.
    Cited: Baidu(14)
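    The selection machinery such a multi-objective optimizer needs, a Pareto-dominance test and extraction of the non-dominated set over the two conflicting criteria (error and complexity), can be sketched as follows. The candidate objective values are assumed, and the swarm update and extremum selection of the paper are not reproduced.
```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (both objectives to be minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(solutions):
    """Return the Pareto front of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# (error, number of fuzzy rules) for candidate fuzzy neural networks -- assumed values.
candidates = [(0.08, 12), (0.10, 7), (0.15, 5), (0.09, 12), (0.20, 4)]
print(non_dominated(candidates))   # trade-off front between accuracy and complexity
```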
    Survey of Internet of Things Security
    Zhang Yuqing, Zhou Wei, Peng Anni
    Journal of Computer Research and Development    2017, 54 (10): 2130-2143.   DOI: 10.7544/issn1000-1239.2017.20170470
    With the development of smart homes, intelligent care, and smart cars, the application fields of the IoT are becoming more and more widespread, and its security and privacy receive increasing attention from researchers. Currently, research on IoT security is still in its initial stage, and most results cannot solve the major security problems in the development of the IoT well. In this paper, we first introduce the three-layer logical architecture of the IoT and outline the security problems and research priorities of each layer. We then discuss the security issues, such as privacy preservation and intrusion detection, that need special attention in the main IoT application scenarios (smart home, intelligent healthcare, Internet of Vehicles, smart grid, and other industrial infrastructure). Through synthesizing and analyzing the deficiencies of existing research and the causes of the security problems, we point out five major technical challenges in IoT security: privacy protection in data sharing, equipment security protection under limited resources, more effective intrusion detection and defense systems and methods, access control of automated equipment operations, and cross-domain authentication of mobile devices. We finally detail every technical challenge and point out future research hotspots in IoT security.
    Cited: Baidu(13)
    A Multi-Agent Social Evolutionary Algorithm for Project Optimization Scheduling
    Pan Xiaoying and Jiao Licheng
    A multi-agent social evolutionary algorithm for the precedence- and resource-constrained single-mode project scheduling problem (RCPSP-MASEA) is proposed. RCPSP-MASEA is used to obtain optimal scheduling sequences so that the duration of the project is minimized. With the intrinsic properties of RCPSP in mind, multi-agent systems, a social acquaintance net, and evolutionary algorithms are integrated into a new algorithm. In this algorithm, all agents live in a lattice-like environment. Using the designed behaviors, RCPSP-MASEA gives agents the ability to sense and act on the environment in which they live, and the local environment of each agent is constructed by the social acquaintance net. Based on the characteristics of project scheduling, the solution encoding and the competition, crossover, and self-learning operators are given. During the process of interacting with the environment and other agents, each agent increases its energy as much as possible, so that RCPSP-MASEA can find the optimum. The performance of the algorithm is analyzed through a thorough computational study on a standard set of project instances from PSPLIB. The experimental results show that RCPSP-MASEA performs well, can reach near-optimal solutions in reasonable time, and has some advantages over other heuristic algorithms.
    Cited: Baidu(13)
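    Any evolutionary approach to the RCPSP ultimately has to decode a priority ordering of activities into a feasible schedule. The sketch below is a plain serial schedule generation scheme for that decoding step; the agent behaviors, acquaintance net, and operators of RCPSP-MASEA are not reproduced, and the instance data are assumed.
```python
def serial_sgs(order, duration, demand, capacity, preds):
    """Serial schedule generation: decode a precedence-feasible activity order into
    start times under renewable resource constraints (single-mode RCPSP).
    Assumes each activity's demand fits within the resource capacities."""
    horizon = sum(duration.values())                 # no schedule needs more time
    usage = [[0] * len(capacity) for _ in range(horizon)]
    start = {}
    for act in order:
        t = max((start[p] + duration[p] for p in preds[act]), default=0)
        while any(usage[tt][r] + demand[act][r] > capacity[r]
                  for tt in range(t, t + duration[act]) for r in range(len(capacity))):
            t += 1                                   # shift right until resources fit
        start[act] = t
        for tt in range(t, t + duration[act]):
            for r in range(len(capacity)):
                usage[tt][r] += demand[act][r]
    return start, max(start[a] + duration[a] for a in order)   # (starts, makespan)

duration = {"A": 3, "B": 2, "C": 2, "D": 1}
demand = {"A": [2], "B": [3], "C": [2], "D": [1]}    # one renewable resource
preds = {"A": set(), "B": set(), "C": {"A"}, "D": {"B", "C"}}
print(serial_sgs(["A", "B", "C", "D"], duration, demand, [4], preds))
```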
    An Iterative Gait Prototype Learning Algorithm Based on Tangent Distance
    Chen Changyou and Zhang Junping
    As the only biometric technique suited to identification in remote surveillance, gait recognition is regarded as having important potential value, and many algorithms have been proposed; at the same time, it faces many challenges. One of these challenges is how to extract features efficiently from a sequence of gait frames. To address this problem, and building on the fact that the gait energy image (GEI) is effective for feature representation, an iterative prototype learning algorithm based on tangent distance is proposed. First, it is assumed that different gaits lie on different manifolds; accordingly, the proposed algorithm refines the definition of the GEI using tangent distance. Then an iterative algorithm is proposed to learn the prototypes by solving an optimization problem. Finally, principal component analysis (PCA) is performed on the prototypes to obtain gait features for classification. The proposed method is proved to converge, and experimental results show its promising accuracy compared with GEIs. The rationality of the assumption that gaits lie on specific manifolds is also validated through experiments.
    Cited: Baidu(12)
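    The representation the algorithm above starts from, the gait energy image (the pixel-wise average of aligned binary silhouettes), and the final PCA step can be sketched together as below. The random silhouettes stand in for real gait frames, and the tangent-distance prototype iteration itself is not reproduced.
```python
import numpy as np

def gait_energy_image(silhouettes):
    """GEI: pixel-wise mean of aligned binary silhouette frames, (T, H, W) -> (H, W)."""
    return np.asarray(silhouettes, dtype=float).mean(axis=0)

def pca_features(geis, n_components=2):
    """Project flattened GEIs onto their top principal components via SVD."""
    X = np.stack([g.ravel() for g in geis])          # one row per GEI
    X = X - X.mean(axis=0)                           # centre the data
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T                   # low-dimensional gait features

rng = np.random.default_rng(0)
sequences = [rng.integers(0, 2, size=(20, 64, 44)) for _ in range(5)]   # 5 fake gait sequences
geis = [gait_energy_image(seq) for seq in sequences]
print(pca_features(geis, n_components=2).shape)      # (5, 2) feature matrix
```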
    An e-Learning Service Discovery Algorithm Based on User Satisfaction
    Zhu Zhengzhou, Wu Zhongfu, and Wu Kaigui
    More and more e-Learning services are used in computer-supported collaborative learning, so it is becoming important to locate suitable e-Learning services accurately and efficiently. This paper proposes an algorithm named eLSDAUS that improves the existing semantic-based e-Learning service matchmaking algorithm. A new factor, user satisfaction, which reflects users' feeling about the result of service discovery, is introduced. The algorithm allows users to take part in the process of e-Learning service discovery and to evaluate its results; their evaluations, in the form of user satisfaction, are fed back to the system. Using an amendment function that takes user satisfaction as input, the system modifies the weight of each property of the advertised service so that the total match degree of service discovery approaches its best value. Two methods are adopted to encourage users to use the e-Learning service discovery system. Experiments indicate that, compared with traditional algorithms, the precision of service discovery improves by more than 3 percent when the number of advertised services reaches 10,000, and the effect becomes better as the number of advertised services increases. After 127 days of learning, over 93% of students are satisfied with the e-Learning service discovery results.
    Cited: Baidu(11)
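    The feedback loop described above adjusts the per-property weights of an advertised service from the reported user satisfaction and recomputes the match degree. The sketch below uses a simple multiplicative update as a stand-in; the amendment function, the 0.5 neutral point, and the property names are assumptions, not the eLSDAUS formulas.
```python
def update_weights(weights, property_scores, satisfaction, rate=0.1):
    """Adjust per-property weights of an advertised service from user satisfaction.

    weights, property_scores: dicts keyed by property name; satisfaction in [0, 1].
    Properties that matched well are reinforced when feedback is positive.
    """
    adjusted = {p: w * (1 + rate * (satisfaction - 0.5) * property_scores[p])
                for p, w in weights.items()}
    total = sum(adjusted.values())
    return {p: w / total for p, w in adjusted.items()}   # keep weights normalised

def match_degree(weights, property_scores):
    """Weighted total match degree between a request and an advertised service."""
    return sum(weights[p] * property_scores[p] for p in weights)

weights = {"topic": 0.4, "difficulty": 0.3, "media": 0.3}
scores = {"topic": 0.9, "difficulty": 0.6, "media": 0.4}       # per-property match scores
print(match_degree(weights, scores))
weights = update_weights(weights, scores, satisfaction=0.8)     # positive feedback
print(match_degree(weights, scores))                            # match degree nudged up
```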
    A Self-Adaptive Image Steganography Algorithm Based on Cover-Coding and Markov Model
    Zhang Zhan, Liu Guangjie, Dai Yuewei, and Wang Zhiquan
    How to design steganography algorithms with large capacity, low distortion, and high statistical security is both a difficulty and a research hotspot. A self-adaptive image steganography algorithm that takes account of both perceptual distortion and second-order statistical security is proposed. It introduces the smoothness of the various parts of the cover object into the generation of cover codes, and reduces distortion by the reasonable use of a cluster of cover codes in each part of the cover object. For embedding, in order to improve statistical security, the algorithm uses a dynamic compensation method based on an image Markov chain model, and it embeds the secret information into the two least significant bit (LTSB) planes to ensure capacity. Experimental results show that, under the same embedded payload, the proposed algorithm has lower distortion and smaller changes in the cover's statistical distribution than the stochastic LTSB-match steganography algorithm and the algorithm that uses only one cover code, and it carries larger payloads than single-cover-code embedding when the distortion and statistical distribution changes are comparable.
    Cited: Baidu(8)
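    The embedding primitive the abstract refers to, writing secret bits into the two least significant bit (LTSB) planes of cover pixels, can be illustrated as below. The cover-code selection, smoothness analysis, and Markov-model compensation of the actual algorithm are not reproduced, and the tiny cover array is assumed.
```python
import numpy as np

def embed_ltsb(cover, bits):
    """Write the secret bit stream, two bits per pixel, into the two LSB planes."""
    assert len(bits) <= 2 * cover.size, "payload exceeds capacity"
    stego = cover.copy().ravel()
    for i in range(0, len(bits), 2):
        pair = (bits[i] << 1) | (bits[i + 1] if i + 1 < len(bits) else 0)
        stego[i // 2] = (int(stego[i // 2]) & 0xFC) | pair   # clear and rewrite 2 LSBs
    return stego.reshape(cover.shape)

def extract_ltsb(stego, n_bits):
    """Read the embedded bit stream back out of the two LSB planes."""
    out = []
    for px in stego.ravel():
        out.extend([(int(px) >> 1) & 1, int(px) & 1])
        if len(out) >= n_bits:
            break
    return out[:n_bits]

cover = np.array([[200, 13], [57, 140]], dtype=np.uint8)   # toy 2x2 grayscale cover
bits = [1, 0, 1, 1, 0, 1]
stego = embed_ltsb(cover, bits)
print(extract_ltsb(stego, len(bits)))                      # -> [1, 0, 1, 1, 0, 1]
```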
    Survey on Privacy Preserving Techniques for Blockchain Technology
    Zhu Liehuang, Gao Feng, Shen Meng, Li Yandong, Zheng Baokun, Mao Hongliang, Wu Zhen
    Journal of Computer Research and Development    2017, 54 (10): 2170-2186.   DOI: 10.7544/issn1000-1239.2017.20170471
    The core features of blockchain technology are "decentralization" and "de-trusting" (trustlessness). As a distributed ledger technology, smart contract infrastructure platform, and novel distributed computing paradigm, it can effectively build programmable currency, programmable finance, and a programmable society, which will have a far-reaching impact on finance and other fields and drive a new round of technological and application change. While blockchain technology can improve efficiency, reduce costs, and enhance data security, it still faces serious privacy issues that have drawn wide attention from researchers. This survey first analyzes the technical characteristics of the blockchain, defines the concepts of identity privacy and transaction privacy, points out the advantages and disadvantages of blockchain technology in privacy protection, and introduces the attack methods used in existing research, such as transaction tracing and account clustering. We then introduce a variety of privacy protection mechanisms, including malicious node detection and access restriction for the network layer; transaction mixing, encryption, and limited release for the transaction layer; and defense mechanisms for the blockchain application layer. Finally, we discuss the limitations of existing techniques and envision future directions on this topic, and we also discuss regulatory approaches to the malicious use of blockchain technology.
    Cited: Baidu(8)
    A KNN-Join Algorithm Based on Δ-Tree for High-Dimensional Data
    Liu Yan and Hao Zhongxiao
    KNN-Join is an important database primitive and has been successfully applied to speed up applications such as similarity search, data analysis, and data mining. Up to now, KNN-Join has been studied in the context of disk-based systems, where it is assumed that the databases are too large to fit into main memory. This assumption is increasingly being challenged as RAM gets cheaper and larger, so it is necessary to study the KNN-Join problem in main memory. The Δ-tree is a novel multi-level index structure that can speed up high-dimensional queries in a main memory environment. In this paper, a new KNN-Join approach is proposed that uses the Δ-tree as the underlying index structure and exploits coding and decoding, bottom-up and depth-first search, and pruning techniques. It solves the problem of identifying the distance from each point in R to its K nearest neighbors in S and improves join efficiency. A correctness verification and a cost analysis of the algorithm are presented. Extensive experiments on both real datasets and synthetic clustered datasets show that Δ-tree-KNN-Join is an efficient KNN-Join method in main memory.
    Cited: Baidu(7)
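    The operation being accelerated above, the KNN join itself (for each point of R, its K nearest neighbours in S), has a straightforward in-memory brute-force form that serves as a correctness baseline. The sketch below is that baseline, not the Δ-tree algorithm; the random data are assumed.
```python
import numpy as np

def knn_join_bruteforce(R, S, k):
    """For every point in R, return the indices of its k nearest neighbours in S.

    R: (n_r, d) array, S: (n_s, d) array. O(n_r * n_s * d) time -- the baseline
    that index structures such as the Delta-tree are designed to beat.
    """
    # Pairwise squared Euclidean distances via broadcasting.
    d2 = ((R[:, None, :] - S[None, :, :]) ** 2).sum(axis=2)
    return np.argsort(d2, axis=1)[:, :k]

rng = np.random.default_rng(1)
R = rng.random((5, 16))                  # 5 query points in a 16-dimensional space
S = rng.random((100, 16))                # 100 data points
print(knn_join_bruteforce(R, S, k=3))    # one row of 3 neighbour indices per R point
```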
    An Improved Working Set Selection Strategy for Sequential Minimal Optimization Algorithm
    Zeng Zhiqiang, Wu Qun, Liao Beishui, and Zhu Shunzhi
    Working set selection is an important step in sequential minimal optimization (SMO) type methods for training support vector machines (SVM). However, the feasible direction strategy for selecting the working set may degrade the performance of the kernel cache maintained in standard SMO. In this paper, an improved working set selection strategy for SMO is presented to handle this difficulty, based on the decrease of the objective function given by second-order information. The new strategy takes into consideration both the number of iterations and the kernel cache performance related to the selection of the working set, in order to improve the efficiency of the kernel cache and thus reduce the number of kernel evaluations of the algorithm as a whole. As a result, the training efficiency of the new method improves greatly compared with the original version, while SMO with the new working set selection strategy is still guaranteed to converge to an optimal solution in theory. Experiments on well-known data sets show that the proposed method is remarkably faster than standard SMO; the more complex the kernel, the higher the dimensionality, and the relatively smaller the cache, the greater the improvement.
    Cited: Baidu(7)
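    A standard second-order working-set selection rule of the kind the abstract builds on can be sketched as follows: the first index maximises the violation of the optimality conditions, and the second maximises the predicted decrease of the dual objective. The cache-aware refinement that is the paper's contribution is not reproduced, and the toy problem is assumed.
```python
import numpy as np

TAU = 1e-12   # guard for non-positive curvature

def select_working_set(y, alpha, grad, Q, C):
    """Second-order working-set selection for SMO-type SVM training.

    y: labels in {-1, +1}; alpha: dual variables; grad: gradient of the dual
    objective; Q[i, j] = y_i * y_j * K(x_i, x_j). Returns (i, j), or None when no
    violating pair remains.
    """
    n = len(y)
    up = [t for t in range(n) if (y[t] == 1 and alpha[t] < C) or (y[t] == -1 and alpha[t] > 0)]
    low = [t for t in range(n) if (y[t] == 1 and alpha[t] > 0) or (y[t] == -1 and alpha[t] < C)]
    if not up or not low:
        return None
    i = max(up, key=lambda t: -y[t] * grad[t])       # most violating "up" index
    m_i = -y[i] * grad[i]
    best_j, best_gain = None, 0.0
    for t in low:
        b = m_i + y[t] * grad[t]                     # violation between i and t
        if b <= 0:
            continue
        a = Q[i, i] + Q[t, t] - 2.0 * y[i] * y[t] * Q[i, t]
        gain = b * b / (a if a > 0 else TAU)         # predicted objective decrease
        if gain > best_gain:
            best_j, best_gain = t, gain
    return None if best_j is None else (i, best_j)

# Toy problem: three 1-D points, linear kernel, alpha = 0.
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([1, 1, -1])
Q = (y[:, None] * y[None, :]) * (X @ X.T)
print(select_working_set(y, alpha=np.zeros(3), grad=-np.ones(3), Q=Q, C=1.0))   # -> (0, 2)
```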
    Detection Approach for Covert Channels Based on Concurrency Conflict Interval Time
    Wang Yongji, Wu Jingzheng, Ding Liping, and Zeng Haitao
    Concurrency conflicts may introduce data-conflict covert channels in multilevel secure systems. Existing covert channel detection methods have two flaws: 1) they analyze conflict records at a single point, so invaders can evade detection; 2) they use a single indicator, which leads to false positives and false negatives. We present a detection method based on conflict interval time, called CTIBDA, which solves these problems: 1) analyzing the conflict records by subject and object prevents intruders from dispersing their activity; 2) both the distribution and the sequence of the intervals between transaction conflicts are used as indicators. The experimental results show that this approach reduces false positives and false negatives and increases accuracy. CTIBDA is suitable for online implementation and can be applied to concurrency-conflict covert channels in other scenarios.
    Cited: Baidu(6)
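    One of the two indicators named above, the distribution of intervals between conflicts, can be checked with a simple two-sample Kolmogorov-Smirnov style comparison between observed intervals and a benign baseline. The threshold and the interval values below are assumptions, and the full CTIBDA procedure (which also uses the interval sequence) is not reproduced.
```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)
    points = sorted(set(a) | set(b))
    cdf = lambda s, x: sum(v <= x for v in s) / len(s)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in points)

# Inter-conflict intervals (seconds); values are illustrative only.
baseline = [4.1, 3.8, 5.2, 4.6, 3.9, 5.0, 4.4, 4.8]        # normal workload
observed = [1.0, 2.0, 1.1, 2.1, 0.9, 2.0, 1.0, 1.9]        # suspiciously regular/short
THRESHOLD = 0.5                                            # assumed decision threshold
score = ks_statistic(baseline, observed)
print(score, "covert channel suspected" if score > THRESHOLD else "looks normal")
```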
    A t-closeness Privacy Model Based on Sensitive Attribute Values Semantics Bucketization
    Zhang Jianpei, Xie Jing, Yang Jing, and Zhang Bing
    Journal of Computer Research and Development   
    Branch Obfuscation: An Efficient Binary Code Obfuscation to Impede Symbolic Execution
    Jia Chunfu, Wang Zhi, Liu Xin, and Liu Xinhai
    Symbolic execution can collect branch conditions along a concrete execution trace of a program and build a logical formula describing the execution path. The formula is then used to reason about how control flow depends on user inputs, network inputs, and other inputs from the execution environment, which can effectively direct dynamic analysis to explore the program's execution path space. Symbolic execution has been widely used in vulnerability detection, code reuse, protocol analysis, and so on, but it can also be used for malicious purposes such as software cracking, tampering, and piracy; reverse engineering based on symbolic execution is a new threat to software protection. This paper proposes a novel binary code obfuscation scheme that obfuscates branch conditions so that it is difficult for symbolic execution to collect them from the execution trace. It conceals branch information by substituting conditional jump instructions with code that raises conditional exceptions and uses the exception handler to transfer control, and it introduces opaque predicates into the obfuscated program to impede statistical analysis. Furthermore, this paper analyzes the potency, resilience, and cost of the branch obfuscation. Experimental results show that branch obfuscation protects various branch conditions and reduces the leakage of branch information at run time, which impedes symbolic-execution-based reverse engineering of the program's internal logic.
    Cited: Baidu(5)
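    A toy source-level illustration of the transformation described above: the conditional jump is replaced by code that raises an exception exactly when the condition holds, and the exception handler transfers control. It is written in Python rather than at the binary level; the division trick and the example condition are assumptions used only to convey the idea, not the paper's instruction-level scheme.
```python
def check_serial_plain(serial, secret=42):
    """Ordinary branch: the comparison is visible as a conditional jump."""
    if serial == secret:
        return "accepted"
    return "rejected"

def check_serial_obfuscated(serial, secret=42):
    """Branch concealed: control transfers through an exception handler instead of
    an explicit conditional jump on the comparison result."""
    try:
        _ = 1 // (serial - secret)      # raises ZeroDivisionError exactly when equal
    except ZeroDivisionError:
        return "accepted"               # 'true' path reached via the handler
    return "rejected"                   # 'false' path falls through

for s in (41, 42, 43):
    assert check_serial_plain(s) == check_serial_obfuscated(s)
print("both variants agree on all inputs")
```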
    Study of Multi-Agent Trust Coalition Based on Self-Organization Evolution
    Cheng Bailiang, Zeng Guosun, and Jie Anquan
    Unlike existing work that applies game theory to multi-agent coalitions, this paper studies coalition formation from the perspective of trust. The trust degree of an individual is built from historical cooperation information, and the trust degree of a coalition is built on top of it to form trusted coalitions. To finish more complex tasks, small coalitions unite into a larger coalition through coalition trust and competitive negotiation. In this way, trust runs through the whole process of coalition evolution and describes its self-organized evolution. To obtain stable coalitions, free competition and trust evaluation are used to distribute income within a coalition. To keep the distribution mechanism effective and fair, framing behavior in free competition is eliminated by amending trust degrees after income is distributed, while private data remain protected. Using trust, the formation structure and evolution process of coalitions are described, and stable coalitions are obtained through fair income distribution with privacy protection. A distributed cooperation model is built, and the computational complexity is greatly reduced by coalition trust with controllable risk to income. Trust coalitions provide an effective guarantee for dynamic coalitions.
    Cited: Baidu(4)