ISSN 1000-1239 CN 11-1777/TP

Most cited articles

    A Grid & P2P Trust Model Based on Recommendation Evidence Reasoning
    Zhu Junmao, Yang Shoubao, Fan Jianping, and Chen Mingyu
    Under the mixed computing environment of grid and P2P (Grid & P2P), grid nodes provide services with QoS guarantees, whereas sharing the computing resources of P2P nodes is a voluntary user action without any QoS guarantee, and users are not responsible for their actions. It is therefore difficult to establish trust relationships among users with traditional trust mechanisms. Drawing on models of trust relationships among people in society, a Grid & P2P trust model based on recommendation evidence reasoning is designed to solve this problem by building a recommendation mechanism in Grid & P2P and integrating the recommendation evidence with Dempster-Shafer (D-S) theory. Theoretical analysis and simulations show that the model tackles the trust problem under Grid & P2P in a simple and efficient way.
    Cited: Baidu(71)
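The evidence-combination step such trust models rely on can be sketched with Dempster's rule from D-S theory. The frame of discernment, mass values and "recommender" labels below are illustrative only, not taken from the paper:

```python
from itertools import product

def combine_ds(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Mass functions map frozenset hypotheses to belief mass; mass
    falling on empty intersections (the conflict K) is renormalized
    away over the remaining hypotheses."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

T, D = frozenset({"trust"}), frozenset({"distrust"})
TD = T | D  # the whole frame (ignorance)
m1 = {T: 0.6, TD: 0.4}          # one recommender's evidence
m2 = {T: 0.7, D: 0.1, TD: 0.2}  # another recommender's evidence
fused = combine_ds(m1, m2)
```

Fusing the two bodies of evidence concentrates most of the combined mass on "trust", which is how independent recommendations reinforce each other in such a model.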
    Multicast Scheduling in Buffered Crossbar Switches with Multiple Input Queues
    Sun Shutao, He Simin, Zheng Yanfeng, and Gao Wen
    The scheduling of multicast traffic in bufferless crossbar switches has been extensively investigated. However, the proposed solutions are hardly practical for high-capacity switches because of either poor performance or high complexity. A buffered crossbar switch with multiple input queues per input port for transferring multicast traffic is proposed. Under this architecture, the scheduler operates in three stages: cell assignment, input scheduling, and output scheduling. Scheduling algorithms with complexity from O(1) upward are presented for the different stages. Simulation results show that both the number of input queues and the size of the crosspoint buffers affect the throughput of a buffered crossbar under multicast traffic. Under bursty multicast traffic, however, increasing the number of input queues yields larger gains regardless of the algorithm used, whether HA-RR-RR with complexity O(1) or MMA-MRSF-LQF with higher complexity. This shows that the proposed scheme is well suited to high-performance switches.
    Cited: Baidu(40)
    Model Counting and Planning Using Extension Rule
    Lai Yong, Ouyang Dantong, Cai Dunbo, and Lü Shuai
    Methods based on the extension rule are new approaches to automated theorem proving and can efficiently solve problems with a high complementary factor. In this paper, a new strategy is proposed to re-implement ER, an algorithm based on the propositional extension rule; the new implementation is superior to the original one. On this basis, the extension rule is applied in three areas. Firstly, sets of analogous SAT problems often have to be solved in real applications; in contrast with solving them separately, an algorithm called nER is developed that solves them as a whole. The algorithm nER exploits the repetition property of ER and generally costs less time than the total time of solving every problem with ER. Furthermore, two new model-counting algorithms based on ER, called #ER and #CDE, are proposed, the latter combining #ER and #DPLL. Experimental results show that #ER outperforms #DPLL on a wide range of problems and that #CDE integrates the advantages of both. Finally, an ER-based SAT solver is embedded into the conformant fast-forward planner to study the potential of ER-based methods in artificial intelligence planning. Preliminary results show the efficiency of ER and suggest future research topics.
    Cited: Baidu(31)
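As a reference baseline for model-counting algorithms such as #ER and #DPLL, a naive enumeration counter over DIMACS-style clauses can be sketched as follows; the formula shown is an arbitrary example, not one from the paper:

```python
from itertools import product

def count_models(clauses, n_vars):
    """Count satisfying assignments of a CNF formula by enumeration.

    Clauses are lists of non-zero ints (DIMACS style): literal v means
    variable v is true, -v means false. Exponential in n_vars, so this
    serves only as a correctness baseline for real #SAT solvers."""
    count = 0
    for bits in product([False, True], repeat=n_vars):
        # a clause is satisfied if any of its literals evaluates true
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or x3), over 3 variables: 4 models
formula = [[1, 2], [-1, 3]]
n_models = count_models(formula, 3)
```

Any practical #SAT algorithm must agree with this brute-force count on small instances, which makes it useful for testing.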
    DNA Computation for a Category of Special Integer Planning Problem
    Wang Lei, Lin Yaping, and Li Zhiyong
    DNA computation, based on the theory of biochemical reactions, performs better on a class of intractable computational problems, especially NP-complete problems, than traditional computing methods on current silicon computers, so its study is of great importance. New concepts such as the rank of a constraint equation group and three kinds of constraint complement links of a constraint equation group are proposed. Based on these concepts and on the fluorescence-labeling method in the surface-based approach to DNA computation, a novel DNA-computation algorithm is designed that finds optimal solutions to a category of special integer-planning problems. By using the fluorescence-quenching technique to eliminate false solutions from all possible solutions to the given integer-planning problem, the algorithm identifies all feasible solutions, and then obtains all optimal solutions by comparing the objective-function values of those feasible solutions. Analysis shows that the new algorithm has good characteristics such as simple encoding, low cost, and short operating time.
    Cited: Baidu(19)
    Fuzzy Neural Network Optimization by a Multi-Objective Particle Swarm Optimization Algorithm
    Ma Ming, Zhou Chunguang, Zhang Libiao, and Ma Jie
    Designing a set of fuzzy neural networks can be viewed as a multi-objective optimization problem in which performance and complexity are two conflicting criteria. An algorithm for solving this multi-objective optimization problem is presented, based on particle swarm optimization with an improved selection of the global and individual extrema. The Pareto optimal set of the fuzzy neural network optimization problem is searched for, and the tradeoff between accuracy and complexity of fuzzy neural networks is clearly shown by the obtained non-dominated solutions. Numerical simulations for taste identification of tea show the effectiveness of the proposed algorithm.
    Cited: Baidu(14)
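The non-dominated (Pareto) filtering that underlies such an accuracy/complexity tradeoff can be sketched directly; the (error, complexity) pairs below are made-up illustrations, not results from the paper:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (error, complexity) pairs for candidate fuzzy neural networks
candidates = [(0.10, 30), (0.08, 45), (0.15, 20), (0.12, 40), (0.08, 60)]
front = pareto_front(candidates)
```

The surviving points trace the tradeoff curve: no network on the front can be improved in accuracy without paying in complexity, or vice versa.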
    Survey of Internet of Things Security
    Zhang Yuqing, Zhou Wei, Peng Anni
    Journal of Computer Research and Development    2017, 54 (10): 2130-2143.   DOI: 10.7544/issn1000-1239.2017.20170470
    With the development of smart home, intelligent care and smart car, the application fields of IoT are becoming more and more widespread, and its security and privacy receive increasing attention from researchers. Currently, research on IoT security is still in its initial stage, and most existing results cannot solve the major security problems in the development of the IoT well. In this paper, we first introduce the three-layer logical architecture of the IoT and outline the security problems and research priorities of each layer. We then discuss the security issues, such as privacy preservation and intrusion detection, that need special attention in the main IoT application scenarios (smart home, intelligent healthcare, connected vehicles, smart grid, and other industrial infrastructure). Through synthesizing and analyzing the deficiencies of existing research and the causes of the security problems, we point out five major technical challenges in IoT security: privacy protection in data sharing, device security protection under limited resources, more effective intrusion detection and defense systems and methods, access control for automated device operations, and cross-domain authentication of mobile devices. We finally detail each technical challenge and point out future IoT security research hotspots.
    Cited: Baidu(13)
    An e-Learning Service Discovery Algorithm Based on User Satisfaction
    Zhu Zhengzhou, Wu Zhongfu, and Wu Kaigui
    More and more e-Learning services are used in computer-supported collaborative learning, so locating proper e-Learning services accurately and efficiently is becoming important. An annexed algorithm named eLSDAUS is proposed to improve the existing semantic-based e-Learning service matchmaking algorithm. A new factor, user satisfaction, defined as the user's feeling about the result of service discovery, is introduced. The algorithm allows users to take part in the process of e-Learning service discovery and to evaluate its results; their evaluation, in the form of user satisfaction, is fed back to the system. Using an amendatory function that takes user satisfaction as input, the system modifies the weights of each property of the advertised service, driving the total match degree of service discovery toward the optimum. Two methods are adopted to encourage users to use the e-Learning service discovery system. Experiments indicate that, compared with traditional algorithms, the precision of service discovery improves by more than 3 percent when the number of advertised services reaches 10,000, and the effect grows as the number of advertised services increases. After 127 days of learning, over 93% of students were satisfied with the e-Learning service discovery results.
    Cited: Baidu(11)
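The satisfaction-driven weight feedback can be illustrated with a minimal sketch. The amendatory function below is a hypothetical stand-in (the abstract does not reproduce the actual formula), and the property names are invented:

```python
def update_weights(weights, satisfaction, rate=0.1):
    """Adjust per-property match weights from user satisfaction feedback.

    satisfaction maps property name -> score in [0, 1]; weights are
    nudged toward properties users found useful and renormalized to
    sum to 1. An illustrative sketch, not the paper's exact function."""
    adjusted = {p: w * (1.0 + rate * (satisfaction.get(p, 0.5) - 0.5))
                for p, w in weights.items()}
    total = sum(adjusted.values())
    return {p: w / total for p, w in adjusted.items()}

# hypothetical match-property weights and one round of user feedback
weights = {"topic": 0.4, "difficulty": 0.3, "format": 0.3}
feedback = {"topic": 0.9, "difficulty": 0.2, "format": 0.5}
new_weights = update_weights(weights, feedback)
```

After the update, properties users rated highly carry more weight in the next round of matchmaking, which is the feedback loop the abstract describes.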
    Survey on Privacy Preserving Techniques for Blockchain Technology
    Zhu Liehuang, Gao Feng, Shen Meng, Li Yandong, Zheng Baokun, Mao Hongliang, Wu Zhen
    Journal of Computer Research and Development    2017, 54 (10): 2170-2186.   DOI: 10.7544/issn1000-1239.2017.20170471
    The core features of blockchain technology are decentralization and de-trusting. As a distributed ledger technology, smart contract infrastructure platform and novel distributed computing paradigm, it can effectively build programmable currency, programmable finance and a programmable society, which will have a far-reaching impact on finance and other fields and drive a new round of technological and application change. While blockchain technology can improve efficiency, reduce costs and enhance data security, it still faces serious privacy issues that have drawn wide attention from researchers. This survey first analyzes the technical characteristics of the blockchain, defines the concepts of identity privacy and transaction privacy, points out the advantages and disadvantages of blockchain technology in privacy protection, and introduces attack methods from existing research, such as transaction tracing and account clustering. We then introduce a variety of privacy-preserving mechanisms, including malicious node detection and access restriction for the network layer; transaction mixing, encryption and limited release for the transaction layer; and defense mechanisms for the blockchain application layer. Finally, we discuss the limitations of existing techniques, envision future directions on this topic, and discuss regulatory approaches to malicious use of blockchain technology.
    Cited: Baidu(8)

    Real-Time Panoramic Video Stitching Based on GPU Acceleration Using Local ORB Feature Extraction
    Du Chengyao, Yuan Jingling, Chen Mincheng, Li Tao
    Journal of Computer Research and Development    2017, 54 (6): 1316-1325.   DOI: 10.7544/issn1000-1239.2017.20170095
    Panoramic video is video recorded from a single point of view to capture the full scene. Devices for collecting panoramic video are drawing widespread attention with the development of VR and live-broadcast video technology. Nevertheless, making panoramic video demands strong CPU and GPU processing power, and traditional panoramic products depend on bulky equipment or post-processing, which results in high power consumption, low stability, unsatisfactory real-time performance and risks to information security. This paper proposes an L-ORB feature detection algorithm that optimizes the feature detection regions of video images and simplifies the ORB algorithm's support for scale and rotation invariance. Feature points are then matched with the multi-probe LSH algorithm, and progressive sample consensus (PROSAC) is used to eliminate false matches. Finally, the mapping relation for image mosaicing is obtained and a multi-band fusion algorithm eliminates the seams between videos. In addition, we use the Nvidia Jetson TX1 heterogeneous embedded system, which integrates an ARM A57 CPU and a Maxwell GPU, leveraging its teraflops of floating-point computing power and built-in video capture, storage and wireless transmission modules to build a real-time multi-camera panoramic stitching system, and we exploit the GPU's block, thread and stream parallelism to speed up the stitching algorithm. Experimental results show that the algorithm improves performance in the feature extraction and matching stages of image stitching, running 11 times faster than the traditional ORB algorithm and 639 times faster than the traditional SIFT algorithm. The system achieves 59 times the performance of a previous embedded system while the power dissipation is reduced to 10 W.
    Cited: Baidu(3)
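The binary-descriptor matching that ORB-style pipelines rely on can be illustrated with a brute-force Hamming matcher. The paper accelerates this step with multi-probe LSH; exhaustive search is shown here only to make the distance criterion concrete, and the descriptors are random bytes rather than real image features:

```python
import numpy as np

def hamming_match(desc_a, desc_b, max_dist=64):
    """Brute-force match ORB-style binary descriptors (rows of uint8).

    For each row of desc_a, find the nearest row of desc_b under
    Hamming distance; keep the pair if the distance is below max_dist."""
    matches = []
    for i in range(len(desc_a)):
        # XOR then popcount gives the Hamming distance to every candidate
        dists = np.unpackbits(desc_a[i] ^ desc_b, axis=1).sum(axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            matches.append((i, j))
    return matches

rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(5, 32), dtype=np.uint8)  # 5 x 256-bit descriptors
b = a.copy()
b[0, 0] ^= 0b1  # flip one bit: still an obvious match
matches = hamming_match(a, b)
```

LSH-based matchers trade a small loss of recall against avoiding this O(n²) comparison, which is what makes the real-time constraint achievable.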
    An Enhanced Biometrics-Key-Based Remote User Authentication Scheme with Smart Card
    Xu Qingui, Huang Peican, Yang Taolan
    Journal of Computer Research and Development    2015, 52 (11): 2645-2655.   DOI: 10.7544/issn1000-1239.2015.20140755
    A biometrics-based remote user authentication scheme with a smart card enforces triple protection, combining smart-card hardware, password authentication and biometrics recognition, which brings a new breakthrough to authentication. The Khan-Kumari scheme, noted for its high security performance, is reviewed, and four defects that may harm authentication are found: flawed encapsulation of user identity secrets, improper access to them, lack of message freshness checks, and insufficient interaction between the authenticating parties. An enhanced biometrics-key-based remote user authentication scheme with a smart card is put forward in this paper. Our scheme enforces four enhancements: mutually verifiable dual factors protect user identity secrets; replayed messages are recognized through message freshness checks; protected parameters are transmitted encrypted with a dynamic Hash key that integrates a time flag; and the authentication process completes gracefully with an acknowledgement message. These measures remarkably strengthen user identity protection and the resistance against smart-card cracking, message replay, identity impersonation and denial of service. Security analysis shows that the enhanced scheme effectively fixes the vulnerabilities found in the Khan-Kumari scheme with small computation and communication cost, achieving remarkably enhanced security against varied attacks. Even when two of the protection measures are compromised, the probability of impersonation or authentication failure caused by attacks can be kept below 10^(-38).
    Cited: Baidu(2)
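The freshness check and the time-flagged dynamic hash key can be sketched roughly as follows. This is an illustrative construction assembled from standard primitives, not the paper's exact protocol; the secret and message values are placeholders:

```python
import hashlib
import hmac
import time

MAX_SKEW = 30  # seconds a timestamp may deviate before rejection

def make_token(secret: bytes, message: bytes, timestamp: int) -> str:
    """Derive a dynamic key from the shared secret and time flag,
    then MAC the message with it."""
    dynamic_key = hashlib.sha256(secret + str(timestamp).encode()).digest()
    return hmac.new(dynamic_key, message, hashlib.sha256).hexdigest()

def verify(secret, message, timestamp, token, now=None):
    """Reject stale (possibly replayed) or tampered messages."""
    now = int(time.time()) if now is None else now
    if abs(now - timestamp) > MAX_SKEW:
        return False  # fails the freshness check
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(make_token(secret, message, timestamp), token)

secret, msg = b"shared-secret", b"login-request"
ts = int(time.time())
token = make_token(secret, msg, ts)
```

Because the MAC key changes with every time flag, a captured token is useless outside its freshness window, which is the replay resistance the scheme's enhancement targets.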
    Granularity Selections in Generalized Incomplete Multi-Granular Labeled Decision Systems
    Wu Weizhi, Yang Li, Tan Anhui, Xu Youhong
    Journal of Computer Research and Development    2018, 55 (6): 1263-1272.   DOI: 10.7544/issn1000-1239.2018.20170233
    Granular computing (GrC), which imitates human thinking, is an approach to knowledge representation and data mining. Its basic computing units are called granules, and its objective is to establish effective computation models for dealing with large-scale complex data and information. The main directions in the study of granular computing are the construction, interpretation and representation of granules, the selection of granularities, and the relations among granules, which are represented by granular IF-THEN rules with granular variables and their relevant granular values. In order to investigate knowledge acquisition, in the sense of decision rules, in incomplete information systems with multi-granular labels, the concept of a generalized incomplete multi-granular labeled information system is first introduced. Information granules with different granulation labels, as well as their relationships, are then represented. Lower and upper approximations of sets at different levels of granulation are further defined and their properties presented. The concept of granularity label selections in generalized incomplete multi-granular labeled information systems is also proposed, and it is shown that the collection of all granularity label selections forms a complete lattice. Finally, optimal granular label selections in incomplete multi-granular labeled decision tables are discussed; belief and plausibility functions from the Dempster-Shafer theory of evidence are employed to characterize optimal granular label selections in consistent incomplete multi-granular labeled decision systems.
    Cited: Baidu(2)
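The lower and upper approximations this line of work builds on can be computed directly from a partition of the universe into granules. The universe, granulation and target concept below are toy examples:

```python
def approximations(blocks, target):
    """Lower/upper approximations of a target set under a partition.

    blocks is a partition of the universe into granules (equivalence
    classes); the lower approximation unions the granules wholly
    inside the target, the upper those that merely intersect it."""
    target = set(target)
    lower = set().union(*([b for b in blocks if set(b) <= target] or [set()]))
    upper = set().union(*([b for b in blocks if set(b) & target] or [set()]))
    return lower, upper

granules = [{1, 2}, {3, 4}, {5, 6}]   # one granulation level of U = {1..6}
X = {1, 2, 3}                          # target concept
low, up = approximations(granules, X)
```

Coarser granulations widen the gap between the two approximations; choosing a granularity level that keeps this gap acceptable is exactly the granularity-selection problem the paper studies.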
    Immune-Computing-Based Location Planning of Base Station and Relay Station in IEEE 802.16j Network
    Zhu Sifeng, Liu Fang, Chai Zhengyi, and Qi Yutao
    The IEEE 802.16j standard provides coverage and capacity improvements through the introduction of new nodes called relay stations (RS), and an IEEE 802.16j network can deliver higher capacity at lower cost than a conventional single-hop wireless access network. The joint optimization of relay station and base station placement is part of network planning for mobile network operators. Because relay stations can be deployed at significantly lower cost than base stations, given a set of candidate sites and a network coverage demand, joint optimization can decrease the number of base stations that need to be deployed and reduce the total cost of network construction. To solve the location planning problem for base stations and relay stations in IEEE 802.16j relay networks, a solution based on an immune algorithm is proposed: the mathematical model of location planning is expounded, the framework of the immune optimization algorithm is given, and simulation experiments are conducted to validate it. Experimental results show that the proposed solution obtains good network capacity at low network construction cost and has good application value.
    Cited: Baidu(1)
    Minimized Upper Bound for #3-SAT Problem in the Worst Case
    Zhou Junping, Yin Minghao, Zhou Chunguang, Zhai Yandong, and Wang Kangping
    Propositional model counting, or #SAT, is the problem of computing the number of models of a given propositional formula. Rigorous theoretical analyses of algorithms for solving #SAT have been proposed in the literature. Their time complexity is calculated from the size of the #SAT instance, which depends not only on the number of variables but also on the number of clauses. Studying worst-case upper bounds for #SAT with the number of clauses as the parameter can both measure the efficiency of algorithms and correctly reflect their performance, so it is significant to seek the minimized worst-case upper bound of the #SAT problem in terms of the number of clauses. In this paper, we first analyze the CER algorithm, which we previously proposed for solving #SAT, and prove an upper bound of O(2^m) with the number of clauses m as the parameter. To increase efficiency, an algorithm MCDP based on Davis-Putnam-Logemann-Loveland (DPLL) for solving #3-SAT is presented. By analyzing the algorithm, we obtain the worst-case upper bound O(1.8393^m) for #3-SAT, where m is the number of clauses in a formula.
    Cited: Baidu(1)
    An Integrated Processing Platform for Traffic Sensor Data Based on Cloud Architecture
    Zhao Zhuofeng, Ding Weilong, Han Yanbo
    Journal of Computer Research and Development    2016, 53 (6): 1332-1341.   DOI: 10.7544/issn1000-1239.2016.20150458
    With the continuous expansion of traffic sensor networks, traffic sensor data is widely available and continuously produced. Compared with traditional traffic data, the data gathered by large numbers of sensors is massive, continuous, streaming and spatio-temporal. Providing integrated support for multi-source, massive and continuous traffic sensor data processing has become a key issue in implementing diversified traffic applications. However, due to the absence of support for spatio-temporal traffic sensor data in current distributed computing platforms, it is difficult to develop the corresponding applications and to optimize data transfer among nodes. In this paper, we propose a traffic domain-specific processing model based on spatio-temporal data objects, which are treated as first-class objects in the distributed processing model. Following the model, we implement an integrated processing platform for traffic sensor data on the share-nothing architecture of cloud computing, designed to combine spatio-temporal data partitioning, pipelined parallel processing and stream computing to support traffic sensor data processing in a scalable architecture with real-time guarantees. Applications of the platform in a real project, and experiments on real traffic sensor data, show that the platform excels in performance and extensibility compared with traditional traffic sensor data processing systems.
    Cited: Baidu(1)
    Extracting Attribute Values for Named Entities Based on Global Feature
    Liu Qian, Wu Dayong, Liu Yue, Cheng Xueqi, Pang Lin
    Journal of Computer Research and Development    2016, 53 (4): 941-948.   DOI: 10.7544/issn1000-1239.2016.20140806
    Attribute-value extraction, which aims to automatically discover the values of attributes of named entities, is an important and challenging task in information extraction. In this paper, we focus on extracting such values from Chinese unstructured text. To keep models easy to compute, current major attribute-value extraction methods use only local features, and thus may not make full use of the global information related to attribute values. We propose a novel approach based on global features to enhance extraction performance. Two types of global feature are defined to capture information beyond local features: the boundary distribution feature and the value-name dependency feature. To our knowledge, this is the first attempt to acquire attribute values using global features. A new perceptron algorithm is then proposed that can use all types of global feature, learning the parameters of local and global features simultaneously. Experiments are carried out on different kinds of attributes over several entity categories. Experimental results show that both precision and recall of the proposed approach are significantly higher than those of a CRF model and an averaged perceptron with only local features, and that the approach generalizes well to the open domain.
    Cited: Baidu(1)
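A mistake-driven perceptron that treats local and global features uniformly can be sketched as follows. The feature names are invented for illustration, and weight averaging (used in the averaged-perceptron baseline) is omitted for brevity:

```python
def perceptron_train(examples, epochs=10):
    """Train a perceptron over sparse feature dicts.

    Each example is (features, label) with label in {+1, -1}; features
    may mix local indicators (e.g. surrounding words) with global ones
    (e.g. boundary distribution), since the update rule is identical
    for both kinds."""
    w = {}
    for _ in range(epochs):
        for feats, y in examples:
            score = sum(w.get(f, 0.0) * v for f, v in feats.items())
            if y * score <= 0:            # mistake-driven update
                for f, v in feats.items():
                    w[f] = w.get(f, 0.0) + y * v
    return w

# hypothetical training examples mixing local and global indicators
train = [
    ({"local:prev=born": 1.0, "global:boundary_ok": 1.0}, +1),
    ({"local:prev=the": 1.0}, -1),
]
w = perceptron_train(train)
```

Because global features enter the score exactly like local ones, no special inference machinery is needed at training time, which is the computational appeal the abstract alludes to.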
    Realtime Capture of High-Speed Traffic on Multi-Core Platform
    Ling Ruilin, Li Junfeng, Li Dan
    Journal of Computer Research and Development    2017, 54 (6): 1300-1313.   DOI: 10.7544/issn1000-1239.2017.20160823
    With the development of Internet applications and the increase of network bandwidth, security issues become increasingly serious. In addition to the spread of viruses, spam and DDoS attacks, many stealthy attack methods have appeared. Network probe tools, deployed as bypass devices at the gateway of an intranet, can collect and analyze all the traffic of the current network. The most important module of a network probe is packet capture. In the Linux network protocol stack, the packet-processing path has many performance bottlenecks and cannot meet the demands of high-speed network environments. In this paper, we review several new packet capture engines based on zero-copy and multi-core technology. We then design and implement a scalable high-performance packet capture framework based on Intel DPDK, which uses RSS (receive-side scaling) to parallelize packet capture and customize packet processing. We also discuss more effective and fair hash functions by which packets can be delivered to different receive queues. Evaluation shows that the system can capture and process packets at nearly line speed while balancing the load across CPU cores.
    Cited: Baidu(1)
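A symmetric flow hash of the kind discussed for fair queue delivery can be sketched minimally. Real RSS hardware uses a Toeplitz hash over the packet's 5-tuple; this stand-in only demonstrates the symmetry property that keeps both directions of a flow on the same queue:

```python
def symmetric_queue(src_ip, dst_ip, src_port, dst_port, n_queues):
    """Map a flow to a receive queue so both directions land together.

    XOR is commutative, so (src, dst) and (dst, src) hash identically;
    a Toeplitz hash with a symmetric key achieves the same in NICs."""
    key = hash(src_ip) ^ hash(dst_ip) ^ (src_port ^ dst_port)
    return key % n_queues

q1 = symmetric_queue("10.0.0.1", "10.0.0.2", 12345, 80, 8)
q2 = symmetric_queue("10.0.0.2", "10.0.0.1", 80, 12345, 8)
```

Direction-symmetric hashing matters for stateful analysis: if request and reply packets of one TCP connection landed on different cores, per-flow state would have to be shared across cores.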
    Phase Transition Properties of k-Independent Sets in Random Graphs
    Lu Youjun, Xu Daoyun
    Journal of Computer Research and Development    2017, 54 (12): 2843-2850.   DOI: 10.7544/issn1000-1239.2017.20160694
    The phase transition is one of the most important properties in the theory of Erdős-Rényi random graphs. In a simple undirected graph G=(V,E), a subset of vertices is a k-independent set if it is an independent set containing k vertices. To understand the structure of k-independent sets in Erdős-Rényi random graphs, their phase transition properties are investigated in this paper. Using the first and second moment methods, it is shown that the threshold probability for the existence of k-independent sets in the random graph G(n,p) is p_c = 1 - n^(-2/(k-1)) when 2≤k=ο(n). Since the random graph G(n,p) is equivalent to G(n,m) when m is close to p·n(n-1)/2, the threshold number of edges for the existence of k-independent sets in G(n,m) is m_c = [n(n-1)/2 · (1 - n^(-2/(k-1)))]. The simulation results agree with the theoretical threshold for the existence of k-independent sets in G(n,p) and G(n,m) when 2≤k=ο(n), and the threshold depends on the total number n of vertices and the size k of the independent set. However, when k=ω(n), the theoretical threshold no longer matches the simulated threshold for the existence of k-independent sets in G(n,p) and G(n,m).
    Cited: Baidu(1)
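The threshold formula and the independence test behind such simulations can be written down directly; the 4-cycle graph below is a toy example:

```python
def threshold_p(n, k):
    """Threshold edge probability p_c = 1 - n^(-2/(k-1)) for the
    existence of a k-independent set in G(n, p), per the paper."""
    return 1.0 - n ** (-2.0 / (k - 1))

def is_independent(edges, subset):
    """True if no edge of the graph joins two vertices of subset."""
    s = set(subset)
    return not any(u in s and v in s for u, v in edges)

# a 4-cycle: {0, 2} is independent, {0, 1} is not
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
```

A simulation then estimates, for each p near threshold_p(n, k), the fraction of sampled graphs G(n, p) containing some independent subset of size k, and looks for the sharp jump from 1 to 0 as p crosses p_c.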
    Research on Scaling Technology of Bitcoin Blockchain
    Yu Hui, Zhang Zongyang, Liu Jianwei
    Journal of Computer Research and Development    2017, 54 (10): 2390-2403.   DOI: 10.7544/issn1000-1239.2017.20170416
    Bitcoin is a cryptocurrency introduced by Satoshi Nakamoto in 2008. It features decentralization, cross-border use and a fixed total supply, and has become one of the most widely used cryptocurrencies. Due to limitations set by the inventor and subsequent developers, the transaction throughput of the Bitcoin network is tightly limited. Recently, throughput has approached the maximum limit and transaction confirmation time has greatly increased. This not only degrades the user experience of Bitcoin and limits its usage, but also puts forward higher requirements for Bitcoin protocol design. Focusing on the challenges of transaction processing performance, this paper aims to promote blockchain capacity and takes a deep look at the Bitcoin protocol. Firstly, we study the current state of the Bitcoin network and analyze transaction delay from Bitcoin transaction data. Secondly, we analyze the feasibility and effectiveness of on-chain scaling proposals. Thirdly, we analyze the mechanics and effects of off-chain scaling proposals. Finally, we weigh the advantages and disadvantages of on-chain and off-chain scaling and propose a scaling roadmap that meets community requirements. Recent progress on Bitcoin scaling supports the correctness of our proposals.
    Cited: Baidu(1)