ISSN 1000-1239 CN 11-1777/TP

Most cited articles

    A Grid & P2P Trust Model Based on Recommendation Evidence Reasoning
    Zhu Junmao, Yang Shoubao, Fan Jianping, and Chen Mingyu
    Under a mixed computing environment of grid and P2P (Grid & P2P), grid nodes provide services with QoS guarantees, whereas sharing the computing resources of P2P nodes is a voluntary user action without any QoS guarantee, and users are not accountable for their actions. It is therefore difficult to establish trust relationships among users with traditional trust mechanisms. Drawing on models of trust relationships in human society, a Grid & P2P trust model based on recommendation evidence reasoning is designed to solve this problem by building a recommendation mechanism in Grid & P2P and combining the recommendation evidence with Dempster-Shafer (D-S) theory. Theoretical analysis and simulations show that the model tackles the trust problem under Grid & P2P in a simple and efficient way.
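    The evidence-fusion step described above can be sketched with Dempster's rule of combination. The frame of discernment {"trust", "distrust"} and the example mass values below are illustrative assumptions, not values taken from the paper.

```python
def combine(m1, m2):
    """Fuse two basic probability assignments over the same frame.

    Each argument maps a frozenset of hypotheses to a mass in [0, 1].
    """
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    # Normalise by the non-conflicting mass.
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

T = frozenset(["trust"])
D = frozenset(["distrust"])
U = T | D  # uncertainty: either hypothesis

# Two recommenders' evidence about the same node (hypothetical values).
rec1 = {T: 0.6, D: 0.1, U: 0.3}
rec2 = {T: 0.5, D: 0.2, U: 0.3}
fused = combine(rec1, rec2)
```

    Agreeing evidence reinforces itself: the fused mass on "trust" exceeds either recommender's individual mass.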
    Cited: Baidu(71)
    Multicast Scheduling in Buffered Crossbar Switches with Multiple Input Queues
    Sun Shutao, He Simin, Zheng Yanfeng, and Gao Wen,
    The scheduling of multicast traffic in bufferless crossbar switches has been extensively investigated. However, the proposed solutions are hardly practical for high-capacity switches because of either poor performance or high complexity. A buffered crossbar switch with multiple input queues per input port for transferring multicast traffic is proposed. Under this architecture, the scheduler operates in three stages: cell assignment, input scheduling, and output scheduling. Scheduling algorithms ranging in complexity from O(1) upward are presented for the different stages. Simulation results show that both the number of input queues and the size of the crosspoint buffers affect the throughput of a buffered crossbar under multicast traffic. Under bursty multicast traffic, however, increasing the number of input queues gains more, regardless of whether the O(1)-complexity HA-RR-RR algorithm or the higher-complexity MMA-MRSF-LQF algorithm is used. This shows that the proposed scheme is well suited to high-performance switches.
    Cited: Baidu(40)
    Model Counting and Planning Using Extension Rule
    Lai Yong, Ouyang Dantong, Cai Dunbo, and Lü Shuai
    Methods based on the extension rule are a new approach to automated theorem proving and can efficiently solve problems with a high complementary factor. In this paper, a new strategy is proposed to re-implement ER, an algorithm based on the propositional extension rule; the new implementation is superior to the original one. On this basis, the extension rule is applied in three areas. Firstly, real applications often involve solving a set of analogous SAT problems. In contrast with solving these SAT problems separately, an algorithm called nER is developed that solves them as a whole; nER exploits the repetition property of ER and generally costs less time than the total time of using ER to solve each problem. Furthermore, two new ER-based algorithms called #ER and #CDE are proposed, the latter being a combination of #ER and #DPLL. Experimental results show that #ER outperforms #DPLL on a wide range of problems and that #CDE integrates the advantages of #ER and #DPLL. Finally, an ER-based SAT solver is embedded into the conformant fast-forward planner to study the potential of ER-based methods in artificial intelligence planning. Preliminary results show the efficiency of ER and suggest future research topics.
    Cited: Baidu(31)
    A Kernel and User-Based Collaborative Filtering Recommendation Algorithm
    Wang Peng, Wang Jingjing, and Yu Nenghai
    With the development of information technology, people can access more and more information. To help users find the information that meets their needs or interests among large amounts of data, personalized recommendation technology has emerged and flourished. As the most widely used and successful recommendation technique, collaborative filtering has spread widely and attracted many researchers. Traditional collaborative filtering algorithms face data sparseness and cold-start problems: because they consider only the limited rating data, it is difficult to estimate the similarity between users accurately, and the final recommendation results suffer. This paper presents a user interest model based on kernel density estimation and, on top of it, a user-based collaborative filtering algorithm built on kernel methods. By mining users' latent interest suggested by their limited ratings, the algorithm can estimate the distribution of user interest over the item space well and provide a better user similarity calculation method. A distance measure based on classification similarity is proposed for the kernel methods, and two kernel functions are investigated for estimating the distribution of user interest. KL divergence is utilized to measure the similarity of users' interest distributions. Experiments show that the algorithm can effectively improve the performance of a recommendation system, especially in the case of sparse data.
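    The KL-divergence similarity step can be sketched as follows. The distributions here are toy histograms over three item categories; the paper's kernel density estimates are replaced by hypothetical values, and the symmetrization used below is one common choice, not necessarily the paper's.

```python
import math

def kl_divergence(p, q, eps=1e-10):
    """D_KL(p || q) for two discrete distributions given as equal-length lists."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def symmetric_similarity(p, q):
    """Turn the (asymmetric) KL divergence into a symmetric similarity in (0, 1]."""
    d = 0.5 * (kl_divergence(p, q) + kl_divergence(q, p))
    return 1.0 / (1.0 + d)

user_a = [0.5, 0.3, 0.2]   # interest over three item categories (assumed)
user_b = [0.4, 0.4, 0.2]
user_c = [0.1, 0.1, 0.8]
# user_a's interest distribution is closer to user_b's than to user_c's.
```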
    Cited: Baidu(20)
    DNA Computation for a Category of Special Integer Planning Problem
    Wang Lei, Lin Yaping, and Li Zhiyong
    DNA computation, based on the theory of biochemical reactions, performs better on a class of intractable computational problems, especially the NP-complete problems, than traditional computing methods based on current silicon computers, so it is of great importance to study DNA computation. New concepts such as the rank of a constraint equation group and three kinds of constraint complement links of a constraint equation group are proposed. Based on these concepts and on the fluorescence-labeling method in the surface-based approach to DNA computation, a novel DNA-computation algorithm is designed that finds the optimal solutions of a category of special integer programming problems. By using the fluorescence-quenching technique to eliminate false solutions from all the possible solutions of the given integer programming problem, the new algorithm can identify all of the feasible solutions and then obtain all the optimal solutions by comparing the objective-function values of those feasible solutions. Analysis shows that the new algorithm has good characteristics such as simple encoding, low cost and short operating time.
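    The DNA algorithm explores all candidate solutions in parallel, quenches the infeasible ones, and then compares objective values. A conventional (sequential) sketch of the same search for a tiny 0-1 integer programme is shown below; the constraint system and objective are illustrative, not taken from the paper.

```python
from itertools import product

def solve(constraints, objective, n_vars):
    """Enumerate all 0-1 assignments, keep the feasible ones, return the optima."""
    feasible = [x for x in product((0, 1), repeat=n_vars)
                if all(c(x) for c in constraints)]
    best = max(objective(x) for x in feasible)
    return best, [x for x in feasible if objective(x) == best]

# maximise x1 + 2*x2 + 3*x3  subject to  x1 + x2 + x3 <= 2  and  x1 + x3 >= 1
constraints = [lambda x: x[0] + x[1] + x[2] <= 2,
               lambda x: x[0] + x[2] >= 1]
objective = lambda x: x[0] + 2 * x[1] + 3 * x[2]
best, optima = solve(constraints, objective, 3)
```

    The DNA approach performs the enumeration step in massive parallelism through hybridization, which is what makes it attractive for NP-complete problems despite the simple overall structure.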
    Cited: Baidu(19)
    Fuzzy Neural Network Optimization by a Multi-Objective Particle Swarm Optimization Algorithm
    Ma Ming, Zhou Chunguang, Zhang Libiao, and Ma Jie
    Designing a set of fuzzy neural networks can be considered as solving a multi-objective optimization problem in which performance and complexity are two conflicting criteria. An algorithm for solving this multi-objective optimization problem is presented, based on particle swarm optimization with an improved selection of the global and individual extrema. The search for the Pareto optimal set of the fuzzy-neural-network optimization problem is performed, and the tradeoff between accuracy and complexity of fuzzy neural networks is clearly shown by the non-dominated solutions obtained. Numerical simulations on taste identification of tea show the effectiveness of the proposed algorithm.
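    The notion of non-dominated solutions used above can be sketched directly. The candidate networks below are hypothetical (error, rule-count) pairs, with both objectives minimised.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the points no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (classification error, number of fuzzy rules) for candidate networks (assumed)
candidates = [(0.10, 20), (0.15, 8), (0.12, 25), (0.30, 5), (0.16, 9)]
front = pareto_front(candidates)
```

    The surviving points trace the accuracy/complexity tradeoff curve the abstract refers to: no front member can be improved in one objective without worsening the other.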
    Cited: Baidu(14)
    A Multi-Agent Social Evolutionary Algorithm for Project Optimization Scheduling
    Pan Xiaoying and Jiao Licheng
    A multi-agent social evolutionary algorithm for the precedence- and resource-constrained single-mode project optimization scheduling problem (RCPSP-MASEA) is proposed. RCPSP-MASEA computes scheduling sequences that minimize the duration of the project. With the intrinsic properties of RCPSP in mind, multi-agent systems, a social acquaintance net and evolutionary algorithms are integrated into a new algorithm. In this algorithm, all agents live in a lattice-like environment. Through the designed behaviors, RCPSP-MASEA gives agents the ability to sense and act on the environment in which they live, and the local environment of each agent is constructed by the social acquaintance net. Based on the characteristics of project optimization scheduling, the encoding of solutions and operators such as competition, crossover and self-learning are given. While interacting with the environment and the other agents, each agent increases its energy as much as possible, so that RCPSP-MASEA can find the optima. The performance of the algorithm is analyzed through a thorough computational study on a standard set of project instances from PSPLIB. The experimental results show that RCPSP-MASEA performs well and can reach near-optimal solutions in reasonable time. Compared with other heuristic algorithms, RCPSP-MASEA also has some advantages.
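    Decoding a scheduling sequence into project start times is the core evaluation step of any RCPSP solver. A minimal serial schedule-generation sketch under one renewable resource is shown below; this is the standard textbook decoding, not necessarily the paper's exact operator, and the instance data are hypothetical.

```python
def serial_sgs(order, dur, preds, demand, capacity):
    """Schedule activities in `order` at the earliest precedence- and
    resource-feasible time; return (start times, makespan)."""
    start, finish = {}, {}
    for j in order:
        t = max((finish[p] for p in preds[j]), default=0)
        while True:
            # check resource usage at each time unit in [t, t + dur[j])
            ok = all(
                sum(demand[k] for k in start
                    if start[k] <= u < finish[k]) + demand[j] <= capacity
                for u in range(t, t + dur[j]))
            if ok:
                break
            t += 1
        start[j], finish[j] = t, t + dur[j]
    return start, max(finish.values())

# A four-activity toy instance (durations, precedence, resource demands assumed).
dur = {1: 2, 2: 3, 3: 2, 4: 1}
preds = {1: [], 2: [], 3: [1], 4: [2, 3]}
demand = {1: 2, 2: 2, 3: 1, 4: 1}
makespan = serial_sgs([1, 2, 3, 4], dur, preds, demand, capacity=3)[1]
```

    An evolutionary algorithm such as RCPSP-MASEA searches over the activity orders fed to this decoder, keeping orders that produce shorter makespans.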
    Cited: Baidu(13)
    Survey of Internet of Things Security
    Zhang Yuqing, Zhou Wei, Peng Anni
    Journal of Computer Research and Development    2017, 54 (10): 2130-2143.   DOI: 10.7544/issn1000-1239.2017.20170470
    With the development of smart homes, intelligent healthcare and smart cars, the application fields of the IoT are becoming more and more widespread, and its security and privacy receive increasing attention from researchers. Currently, research on the security of the IoT is still at an initial stage, and most existing results cannot solve the major security problems in the development of the IoT well. In this paper, we first introduce the three-layer logical architecture of the IoT and outline the security problems and research priorities of each level. Then we discuss the security issues, such as privacy preservation and intrusion detection, that need special attention in the main IoT application scenarios (smart home, intelligent healthcare, vehicular networks, smart grid, and other industrial infrastructure). Through synthesizing and analyzing the deficiencies of existing research and the causes of security problems, we point out five major technical challenges in IoT security: privacy protection in data sharing, equipment security protection under limited resources, more effective intrusion detection and defense systems and methods, access control of automated equipment operations, and cross-domain authentication of mobile devices. We finally detail each technical challenge and point out future research hotspots in IoT security.
    Cited: Baidu(13)
    An Iterative Gait Prototype Learning Algorithm Based on Tangent Distance
    Chen Changyou and Zhang Junping
    As the only biometric technique applicable to remote surveillance, gait recognition is regarded as being of important potential value, and many algorithms have been proposed; at the same time, it has encountered many challenges. One of these challenges is how to extract features efficiently from a sequence of gait frames. To address this problem, and building on the fact that the gait energy image (GEI) is effective for feature representation, an iterative prototype-learning algorithm based on tangent distance is proposed. First, it is assumed that different gaits lie on different manifolds. Accordingly, the proposed algorithm refines the definition of the GEI using tangent distance. Then an iterative algorithm is proposed to learn the prototypes by solving an optimization problem. Finally, principal component analysis (PCA) is performed on the prototypes to obtain gait features for classification. The proposed method is proved to converge, and experimental results show that the algorithm improves accuracy compared with GEIs. The rationality of the assumption that gaits lie on specific manifolds is also validated through experiments.
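    The baseline the paper refines, the gait energy image, is simply the pixel-wise mean of an aligned, binarised silhouette sequence. A minimal sketch over toy 2x2 "frames" (nested lists with values in {0, 1}) is shown below; real GEIs use full-size silhouette images.

```python
def gait_energy_image(frames):
    """Average a sequence of equal-sized binary silhouette frames pixel-wise."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]

frames = [
    [[1, 0], [1, 1]],
    [[1, 1], [0, 1]],
]
gei = gait_energy_image(frames)  # pixel-wise averages in [0, 1]
```

    Pixels that are silhouette in every frame get value 1.0; pixels the body only sometimes covers get intermediate values, which is what makes the GEI a compact summary of gait dynamics.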
    Cited: Baidu(12)
    An e-Learning Service Discovery Algorithm Based on User Satisfaction
    Zhu Zhengzhou, Wu Zhongfu, and Wu Kaigui
    More and more e-Learning services are used in computer-supported collaborative learning, so it is becoming important to locate suitable e-Learning services accurately and efficiently. In this paper, an annexed algorithm named eLSDAUS is proposed to improve the existing semantic-based e-Learning service matchmaking algorithm. The algorithm introduces a new factor, user satisfaction, which reflects users' feelings about the results of service discovery. It allows users to take part in the process of e-Learning service discovery and to evaluate its results; the users' evaluations, in the form of user satisfaction, are fed back to the system. Using an amendatory function that takes user satisfaction as input, the system modifies the weights of each property of the advertised services, bringing the total match degree of service discovery up to the best. Two methods are adopted to encourage users to use the e-Learning service discovery system. Experiments indicate that, compared with traditional algorithms, the precision of service discovery improves by more than 3 percent when the number of advertised services reaches 10000, and the effect grows with the number of advertised services. After 127 days of learning, over 93% of students were satisfied with the e-Learning service discovery results.
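    The feedback loop can be sketched as a weight update driven by satisfaction. The update rule and all numbers below are illustrative assumptions; the paper's amendatory function is not specified in the abstract.

```python
def update_weights(weights, prop_scores, satisfaction, rate=0.1):
    """Shift weight toward properties that matched well when the user was
    satisfied (satisfaction > 0.5), away from them otherwise; renormalise."""
    signal = satisfaction - 0.5
    raw = [w * (1.0 + rate * signal * s) for w, s in zip(weights, prop_scores)]
    total = sum(raw)
    return [w / total for w in raw]

weights = [0.4, 0.3, 0.3]          # per-property weights (assumed)
prop_scores = [0.9, 0.2, 0.5]      # how well each property matched (assumed)
new_w = update_weights(weights, prop_scores, satisfaction=0.9)
```

    A satisfied user thus reinforces the properties that drove the match, while an unsatisfied one weakens them, which is the general mechanism the abstract describes.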
    Cited: Baidu(11)
    Survey on Privacy Preserving Techniques for Blockchain Technology
    Zhu Liehuang, Gao Feng, Shen Meng, Li Yandong, Zheng Baokun, Mao Hongliang, Wu Zhen
    Journal of Computer Research and Development    2017, 54 (10): 2170-2186.   DOI: 10.7544/issn1000-1239.2017.20170471
    Core features of blockchain technology are "de-centralization" and "de-trusting". As a distributed ledger technology, smart contract infrastructure platform and novel distributed computing paradigm, it can effectively build programmable currency, programmable finance and programmable society, which will have a far-reaching impact on finance and other fields and drive a new round of technological and application change. While blockchain technology can improve efficiency, reduce costs and enhance data security, it still faces serious privacy issues that have drawn wide attention from researchers. This survey first analyzes the technical characteristics of the blockchain, defines the concepts of identity privacy and transaction privacy, points out the advantages and disadvantages of blockchain technology in privacy protection, and introduces the attack methods in existing research, such as transaction tracing and account clustering. We then introduce a variety of privacy-preserving mechanisms, including malicious-node detection and access restriction at the network layer; transaction mixing, encryption and limited-release techniques at the transaction layer; and defense mechanisms at the blockchain application layer. Finally, we discuss the limitations of the existing technologies and envision future directions on this topic. In addition, regulatory approaches to malicious uses of blockchain technology are discussed.
    Cited: Baidu(8)

    A Self-Adaptive Image Steganography Algorithm Based on Cover-Coding and Markov Model
    Zhang Zhan, Liu Guangjie, Dai Yuewei, and Wang Zhiquan
    How to design steganography algorithms with large capacity, low distortion and high statistical security is both a difficult problem and a research hotspot. A self-adaptive image steganography algorithm is proposed that takes both perceptual distortion and second-order statistical security into account. It introduces the smoothness of the various parts of the cover object into the generation of cover codes, and reduces distortion by the reasonable use of a cluster of cover codes in each part of the cover object. For embedding, in order to improve statistical security, the algorithm uses a dynamic compensation method based on an image Markov chain model, and it embeds secret information into the least two significant bit (LTSB) planes to ensure capacity. Experimental results show that the proposed algorithm causes lower distortion and smaller changes to the cover statistics than the stochastic LTSB-match steganography algorithm and an algorithm that uses only one cover code, under the same embedded payload. Moreover, the proposed algorithm achieves larger payloads than one-cover-code embedding when the distortion and the changes in statistical distribution are comparable.
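    The capacity-bearing step, writing message bits into the two least significant bit planes, can be sketched as follows. This is plain LTSB replacement; the paper's cover codes and Markov-model compensation, which reduce the statistical footprint of exactly this step, are omitted, and the pixel values are hypothetical.

```python
def embed_ltsb(pixels, bits):
    """Write pairs of message bits into the two low bits of each pixel."""
    assert len(bits) == 2 * len(pixels)
    out = []
    for i, p in enumerate(pixels):
        two = (bits[2 * i] << 1) | bits[2 * i + 1]
        out.append(((p & ~0b11) & 0xFF) | two)  # clear low two bits, set message
    return out

def extract_ltsb(pixels):
    """Read the two low bits of each pixel back out as a bit list."""
    bits = []
    for p in pixels:
        bits += [(p >> 1) & 1, p & 1]
    return bits

cover = [130, 201, 77, 54]           # grayscale pixel values (assumed)
message = [1, 0, 0, 1, 1, 1, 0, 0]
stego = embed_ltsb(cover, message)
assert extract_ltsb(stego) == message
```

    Each pixel changes by at most 3 in value, which is why LTSB embedding keeps perceptual distortion low while doubling the capacity of single-LSB embedding.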
    Cited: Baidu(8)
    A KNN-Join Algorithm Based on Δ-Tree for High-Dimensional Data
    Liu Yan, and Hao Zhongxiao,
    KNN-join is an important database primitive that has been successfully applied to speed up applications such as similarity search, data analysis and data mining. Up to now, the KNN-join has been studied in the context of disk-based systems, where it is assumed that the databases are too large to fit into main memory. This assumption is increasingly being challenged as RAM gets cheaper and larger, so it is necessary to study the KNN-join problem in main memory. The Δ-tree is a novel multi-level index structure that can speed up high-dimensional queries in main-memory environments. In this paper, a new KNN-join approach is proposed that uses the Δ-tree as the underlying index structure and exploits coding and decoding, bottom-up depth-first search, and pruning techniques. It solves the difficulty of identifying, for each point in R, the distance to its K nearest neighbors in S, and improves join efficiency. A correctness verification and cost analysis of the algorithm are presented. Extensive experiments on both real datasets and synthetic clustered datasets show that Δ-tree-KNN-join is an efficient main-memory KNN-join method.
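    The operation itself is easy to state: for every point r in R, find its K nearest neighbours in S. A minimal brute-force sketch over toy 2-D points is shown below; the Δ-tree index in the paper exists precisely to avoid this O(|R|·|S|) scan in high dimensions.

```python
def knn_join(R, S, k):
    """Map each point of R to its k nearest points of S (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return {r: sorted(S, key=lambda s: dist2(r, s))[:k] for r in R}

R = [(0, 0), (5, 5)]
S = [(1, 0), (0, 2), (4, 4), (9, 9)]
result = knn_join(R, S, k=2)
```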
    Cited: Baidu(7)
    A t-closeness Privacy Model Based on Sensitive Attribute Values Semantics Bucketization
    Zhang Jianpei, Xie Jing, Yang Jing, and Zhang Bing
    Journal of Computer Research and Development   
    Real-Time Panoramic Video Stitching Based on GPU Acceleration Using Local ORB Feature Extraction
    Du Chengyao, Yuan Jingling, Chen Mincheng, Li Tao
    Journal of Computer Research and Development    2017, 54 (6): 1316-1325.   DOI: 10.7544/issn1000-1239.2017.20170095
    Panoramic video is video recorded from a single point of view that captures the full scene. Devices for collecting panoramic video are receiving widespread attention with the development of VR and live-broadcast video technology. Nevertheless, making panoramic video requires strong CPU and GPU processing power. Traditional panoramic products depend on large equipment or post-processing, which results in high power consumption, low stability, unsatisfactory real-time performance and drawbacks for information security. This paper proposes an L-ORB feature detection algorithm that optimizes the feature detection regions of the video images and simplifies the ORB algorithm's support for scale and rotation invariance. The feature points are then matched with the multi-probe LSH algorithm, and progressive sample consensus (PROSAC) is used to eliminate false matches. Finally, we obtain the mapping relation for image mosaicing and use a multi-band fusion algorithm to eliminate seams between the videos. In addition, we use the Nvidia Jetson TX1 heterogeneous embedded system, which integrates an ARM A57 CPU and a Maxwell GPU, leveraging its teraflops of floating-point computing power and built-in video capture, storage and wireless transmission modules to build a real-time multi-camera panoramic video stitching system, making effective use of GPU block-, thread- and stream-level parallelism to speed up the image stitching algorithm. The experimental results show that the algorithm improves performance in the feature extraction and matching stages of image stitching, running 11 times faster than the traditional ORB algorithm and 639 times faster than the traditional SIFT algorithm. The performance of the system presented here is 59 times that of the former embedded one, while the power dissipation is reduced to 10 W.
    Cited: Baidu(3)
    An Enhanced Biometrics-Key-Based Remote User Authentication Scheme with Smart Card
    Xu Qingui, Huang Peican, Yang Taolan
    Journal of Computer Research and Development    2015, 52 (11): 2645-2655.   DOI: 10.7544/issn1000-1239.2015.20140755
    A biometrics-based remote user authentication scheme with smart card enforces triple protection, combining smart-card hardware, user password authentication and biometric recognition, which brings a new breakthrough to authentication. The Khan-Kumari scheme, which is characterized by high security performance, is reviewed, and four defects that may harm authentication are found: flawed encapsulation of user identity secrets, improper access to them, lack of message freshness checks, and insufficient interaction between the authenticating parties. An enhanced biometrics-key-based remote user authentication scheme with smart card is put forward in this paper. Our scheme enforces four enhancements: mutually verifiable dual factors protect user identity secrets; replayed messages are recognized through message freshness checks; protected parameters are transmitted only after encryption with a dynamic hash key that integrates a time flag; and the authentication process is completed gracefully with an acknowledgement message. With these measures, user identity protection is enhanced remarkably, and resistance against smart-card cracking, message replay, identity impersonation and denial of service is strengthened. Security analysis shows that the enhanced scheme effectively fixes the vulnerabilities found in the Khan-Kumari scheme with small computation and communication cost, achieving remarkably enhanced security performance against a variety of attacks. Even in the circumstance that two of the protection measures are compromised, the probability of impersonation and authentication failure caused by attacks can be kept below 10^{-38}.
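    Two of the fixes, a keyed hash that mixes in a time flag and a freshness check that rejects replays, can be sketched with standard primitives. The key, the 30-second window and the message format are illustrative assumptions, not the paper's exact construction.

```python
import hashlib
import hmac

def make_token(key, user_id, timestamp):
    """Keyed hash over the identity and a time flag (HMAC-SHA256)."""
    msg = f"{user_id}|{timestamp}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(key, user_id, timestamp, token, now, window=30):
    """Reject stale (likely replayed) messages, then check the keyed hash."""
    if abs(now - timestamp) > window:
        return False
    expected = make_token(key, user_id, timestamp)
    return hmac.compare_digest(expected, token)

key = b"shared-secret"                               # assumed pre-shared key
tok = make_token(key, "alice", 1000)
assert verify(key, "alice", 1000, tok, now=1010)      # fresh: accepted
assert not verify(key, "alice", 1000, tok, now=2000)  # replayed later: rejected
```

    Binding the token to a timestamp means a captured message is useless outside its freshness window, which is the essence of the replay defense the scheme adds.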
    Cited: Baidu(2)
    Granularity Selections in Generalized Incomplete Multi-Granular Labeled Decision Systems
    Wu Weizhi, Yang Li, Tan Anhui, Xu Youhong
    Journal of Computer Research and Development    2018, 55 (6): 1263-1272.   DOI: 10.7544/issn1000-1239.2018.20170233
    Granular computing (GrC), which imitates human thinking, is an approach to knowledge representation and data mining. Its basic computing units are called granules, and its objective is to establish effective computation models for dealing with large-scale complex data and information. The main directions in the study of granular computing are the construction, interpretation and representation of granules, the selection of granularities, and the relations among granules, which are represented by granular IF-THEN rules with granular variables and their relevant granular values. In order to investigate knowledge acquisition, in the sense of decision rules, in incomplete information systems with multi-granular labels, the concept of generalized incomplete multi-granular labeled information systems is first introduced. Information granules with different labels of granulation, as well as their relationships, are then represented in generalized incomplete multi-granular labeled information systems. Lower and upper approximations of sets at different levels of granulation are further defined and their properties presented. The concept of granularity label selections in generalized incomplete multi-granular labeled information systems is also proposed, and it is shown that the collection of all granularity label selections forms a complete lattice. Finally, optimal granular label selections in incomplete multi-granular labeled decision tables are discussed. Belief and plausibility functions from the Dempster-Shafer theory of evidence are employed to characterize optimal granular label selections in consistent incomplete multi-granular labeled decision systems.
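    The lower and upper approximations the paper generalizes can be sketched at a single granulation level. The toy universe and partition below are illustrative; the paper works with families of such granulations indexed by labels.

```python
def approximations(granules, target):
    """Return (lower, upper) approximations of `target` w.r.t. a partition."""
    lower, upper = set(), set()
    for g in granules:
        if g <= target:
            lower |= g          # granule entirely inside the target set
        if g & target:
            upper |= g          # granule that overlaps the target set
    return lower, upper

granules = [{1, 2}, {3, 4}, {5}]    # an equivalence partition of {1, ..., 5}
target = {1, 2, 3}
lower, upper = approximations(granules, target)
```

    The lower approximation collects the objects certainly in the target given the granulation, the upper those possibly in it; coarser granules widen the gap between the two, which is what granularity selection trades off.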
    Cited: Baidu(2)
    Immune-Computing-Based Location Planning of Base Station and Relay Station in IEEE 802.16j Network
    Zhu Sifeng, Liu Fang, Chai Zhengyi, and Qi Yutao,
    The IEEE 802.16j standard provides coverage and capacity improvements through the introduction of new nodes called relay stations (RS). An IEEE 802.16j network has the potential to deliver higher-capacity networks at lower cost than conventional single-hop wireless access networks. The joint optimization of relay stations and base stations is part of network planning for mobile network operators. Because relay stations can be deployed at significantly lower cost than base stations, given a set of candidate sites and a network coverage demand, the number of base stations to deploy can be decreased through the joint optimization of relay and base stations, reducing the total cost of network construction. To solve the problem of location planning of base stations and relay stations in IEEE 802.16j relay networks, a location-planning solution based on an immune algorithm is proposed. The mathematical model of location planning is expounded, the framework of the immune optimization algorithm is given, and simulation experiments are conducted to validate the algorithm. Experimental results show that the proposed solution obtains good network capacity at low network construction cost and has good application value.
    Cited: Baidu(1)
    Minimized Upper Bound for #3-SAT Problem in the Worst Case
    Zhou Junping, Yin Minghao, Zhou Chunguang, Zhai Yandong, and Wang Kangping
    Propositional model counting, or #SAT, is the problem of computing the number of models of a given propositional formula. Rigorous theoretical analyses of algorithms for solving #SAT have been proposed in the literature. Their time complexity is calculated based on the size of the #SAT instances, which depends not only on the number of variables but also on the number of clauses. Research on the worst-case upper bound for #SAT with the number of clauses as the parameter can both measure the efficiency of the algorithms and correctly reflect their performance. Therefore, it is significant to derive the minimized worst-case upper bound of the #SAT problem using the number of clauses as the parameter. In this paper, we first analyze the CER algorithm, which we previously proposed for solving #SAT, and prove an upper bound O(2^m) with respect to the number of clauses m. To increase efficiency, an algorithm MCDP based on Davis-Putnam-Logemann-Loveland (DPLL) for solving #3-SAT is presented. By analyzing the algorithm, we obtain the worst-case upper bound O(1.8393^m) for #3-SAT, where m is the number of clauses in the formula.
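    A minimal DPLL-style model counter in the spirit of MCDP is sketched below: branch on a variable, simplify, and add the two branch counts, crediting 2^f models for the f variables a branch never touches. Clauses are lists of non-zero integers, negative meaning negated; the branching heuristic here is deliberately naive and is not the paper's refined case analysis.

```python
def count_models(clauses, n_vars):
    def simplify(cls, lit):
        """Assign `lit` true: drop satisfied clauses, shrink the rest."""
        out = []
        for c in cls:
            if lit in c:
                continue                 # clause satisfied
            r = [l for l in c if l != -lit]
            if not r:
                return None              # empty clause: contradiction
            out.append(r)
        return out

    def dpll(cls, free):
        if cls is None:
            return 0
        if not cls:
            return 2 ** free             # remaining variables unconstrained
        v = abs(cls[0][0])
        return (dpll(simplify(cls, v), free - 1) +
                dpll(simplify(cls, -v), free - 1))

    return dpll(clauses, n_vars)

# (x1 OR x2 OR x3) AND (NOT x1 OR x2): 5 of the 8 assignments are models.
assert count_models([[1, 2, 3], [-1, 2]], 3) == 5
```

    The worst-case bounds in the abstract come from analysing how fast such branching shrinks the clause set, with m, the number of clauses, as the size parameter.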
    Cited: Baidu(1)