• China Fine Quality Science and Technology Journal
  • CCF-recommended Class A Chinese journal
  • T1-class high-quality science and technology journal in the computing field

2009  Vol. 46  No. 7

Abstract:
Delay-tolerant networks (DTNs) are intermittently connected networks that may suffer from frequent and long-lasting disconnections due to various reasons such as mobility, power management, and scheduling. Representative delay-tolerant networks include wireless sensor networks using scheduled intermittent connectivity, mobile ad hoc networks, satellite networks with periodic connectivity, village networks, wildlife tracking networks, and pocket switched networks. Because of these broad application prospects, delay-tolerant networks have attracted much attention. However, compared with traditional networks, there are no stable end-to-end delivery paths in delay-tolerant networks, and the routing problem is thus much more complicated. Most existing research focuses on this problem, and many delay-tolerant network routing algorithms have been proposed. In this paper, the state of the art of these algorithms is surveyed. First, the performance evaluation criteria are introduced. Second, different taxonomies for delay-tolerant network routing algorithms are presented: according to the routing strategy, the algorithms can be categorized into replication-based and forwarding-based algorithms; according to the network model, they can be categorized into algorithms based on the active mobility model and algorithms based on the passive mobility model. Third, representative routing algorithms are described for each class, and their advantages and disadvantages are summarized. Finally, future research directions in this area are discussed.
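For illustration only, the sketch below contrasts the per-contact decision that separates the two routing families; it is not any specific algorithm from the survey, and the node fields and utility metric are invented.

```python
# Illustrative sketch, not a specific DTN algorithm: the per-contact decision
# that separates replication-based from forwarding-based routing.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    buffer: set = field(default_factory=set)     # bundle ids stored locally
    utility: dict = field(default_factory=dict)  # destination -> delivery utility

def on_contact(bundle_id, dest, a: Node, b: Node, replication=True):
    """Node a meets node b while holding bundle_id addressed to dest."""
    if replication:
        # Replication-based: hand over a copy and keep the local one,
        # trading bandwidth and storage for delivery probability.
        b.buffer.add(bundle_id)
    elif b.utility.get(dest, 0.0) > a.utility.get(dest, 0.0):
        # Forwarding-based: keep a single copy and move it only when the
        # peer looks like a better next hop under some utility metric.
        b.buffer.add(bundle_id)
        a.buffer.discard(bundle_id)
```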
Abstract:
Sensor networks are frequently deployed in rather complex environments, such as underground parking lots and coal mine tunnels, which are full of obstacles and corners. Such complicated settings pose a big challenge for protocol design and make the already unreliable short-distance wireless communication even less efficient. Earlier protocol designs for WSNs paid more attention to connectivity as an on/off property, neglecting the fact that real-world link quality often takes an intermediate value between 0 and 1. As the link characteristics of short-distance wireless communication have become better understood, researchers have begun to recognize the large impact of this problem and have tried many methods to improve the forwarding performance of protocols, such as link estimation, disjoint multi-path, braided multi-path, and multi-link strategies. In this paper, both uni-link and multi-link forwarding strategies are modeled and analyzed, and a condition is deduced under which the multi-link strategy performs better than the uni-link one. In addition, a collision model for both strategies is established and studied, which reveals the funnel bottleneck and its impact on both energy efficiency and delivery rate. Finally, simulations are conducted on the NS2 platform, and the derived condition and collision characteristics are verified over the simulated network.
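As a rough numerical illustration of why spreading transmissions over several links can help, the snippet below compares per-hop delivery probabilities under the simplifying assumption of independent link successes; it is not the paper's model, and the probabilities are made up.

```python
# Toy comparison of uni-link vs. multi-link forwarding per hop,
# assuming independent link successes (a simplification).

def uni_link_success(p, retries):
    """Probability that at least one of `retries` attempts on one link succeeds."""
    return 1.0 - (1.0 - p) ** retries

def multi_link_success(ps):
    """Probability that at least one of several parallel candidate links delivers."""
    fail = 1.0
    for p in ps:
        fail *= (1.0 - p)
    return 1.0 - fail

if __name__ == "__main__":
    print(uni_link_success(0.6, 3))             # one link, three tries
    print(multi_link_success([0.6, 0.5, 0.4]))  # three links, one try each
```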
Abstract:
As an important active queue management scheme, the PI controller eliminates the steady-state error of the queue length by introducing an integral factor, improving throughput while reducing queuing delay. However, it cannot adapt its control parameters when the network state varies with time, so when the traffic flows change, the PI controller cannot converge quickly. Based on the idea of detecting and estimating network state information from the network flows, the number of active flows, the average round-trip time (RTT), and the capacity of the outgoing link are estimated. The number of active flows is estimated from the inverse proportion of the hit-function probability; the network capacity is estimated from the average number of data packets per unit time; and the average RTT is estimated from the steady-state relation among the number of active flows, the network capacity, and the packet loss probability. A new scheme called fast convergent PI (FCPI) is proposed based on the TCP-AQM system model and the estimated network state parameters, in which the controller can be adjusted according to the real-time network state. Simulation results show that the new scheme not only improves the convergence rate of the PI controller, but also proves robust and effective in different scenarios, which makes it more suitable for high-speed routers.
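The sketch below shows the standard PI AQM update that FCPI builds on; the gains, reference queue length, and the gain-retuning hook are illustrative assumptions rather than the paper's formulas.

```python
# Minimal sketch of a PI active queue management update (the baseline that an
# FCPI-style scheme adapts); parameter values are illustrative only.

class PIAQM:
    def __init__(self, a=1.822e-5, b=1.816e-5, q_ref=200):
        self.a, self.b, self.q_ref = a, b, q_ref
        self.prev_q = q_ref
        self.p = 0.0                        # packet drop/mark probability

    def update(self, q):
        """One sampling step: push the measured queue length q toward q_ref."""
        self.p += self.a * (q - self.q_ref) - self.b * (self.prev_q - self.q_ref)
        self.p = min(max(self.p, 0.0), 1.0)
        self.prev_q = q
        return self.p

    def retune(self, n_flows, rtt, capacity):
        """Placeholder for FCPI-style gain adaptation from the estimated number
        of flows, average RTT, and link capacity; the actual formulas come from
        the paper's TCP-AQM model and are not reproduced here."""
        pass
```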
Abstract:
As the standard of worldwide interoperability for microwave access (WiMAX) technology, IEEE 802.16 defines five classes of traffic service, namely UGS, rtPS, ertPS, nrtPS, and BE, and introduces a quality-of-service (QoS) support mechanism into the medium access control (MAC) layer, but it does not specify a corresponding scheduling algorithm. In order to provide various multimedia communications with effective QoS guarantees, a two-level scheduling scheme is proposed based on the orthogonal frequency division multiple access (OFDMA) technology and the adaptive modulation and coding (AMC) mechanism. This scheduling scheme employs a cross-layer design methodology and can be used in the point-to-multipoint (PMP) WiMAX downlink. According to the QoS priority value, the first-level scheduler schedules the head-of-line (HOL) packets in the different service-class buffers to satisfy the rtPS maximum delay restriction and the nrtPS minimum rate requirement simultaneously. After the first-level scheduling, the second-level scheduler schedules the HOL packets in the different user buffers according to the AMC information and the users' state information to obtain rate fairness among users. Simulation results show that the proposed two-level scheme can meet the various QoS requirements of multimedia services without sacrificing users' rate fairness, while achieving preferable WiMAX system throughput.
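A hedged sketch of such a two-level ordering is given below; the priority formulas and the fairness tie-breaker are illustrative assumptions, not the paper's scheduler.

```python
# Illustrative two-level ordering: level 1 ranks HOL packets by an assumed QoS
# priority, level 2 favors users whose current AMC rate is high relative to
# their average rate (a simple rate-fairness proxy).

from dataclasses import dataclass

@dataclass
class HolPacket:
    user: str
    service: str          # "UGS", "rtPS", "ertPS", "nrtPS" or "BE"
    waiting: float        # time the head-of-line packet has waited (s)
    deadline: float       # rtPS maximum delay bound (s)
    min_rate: float       # nrtPS minimum reserved rate (b/s)
    got_rate: float       # rate this connection has received so far (b/s)

def qos_priority(p):
    """Level 1: how urgent the packet's QoS constraint is (assumed formulas)."""
    if p.service in ("UGS", "ertPS"):
        return 3.0                                   # fixed-rate, always served
    if p.service == "rtPS":
        return 2.0 + p.waiting / p.deadline          # urgency grows near the bound
    if p.service == "nrtPS":
        return 1.0 + max(0.0, 1.0 - p.got_rate / p.min_rate)
    return 0.0                                       # best effort

def schedule(packets, amc_rate, avg_rate):
    """Level 2: order lexicographically by (priority, AMC-rate / average-rate)."""
    return sorted(packets,
                  key=lambda p: (qos_priority(p),
                                 amc_rate[p.user] / max(avg_rate[p.user], 1e-9)),
                  reverse=True)
```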
Abstract:
IEEE 802.11s WLAN Mesh lacks an authentication protocol for station roaming. Moreover, in the current authentication protocol EMSA of IEEE 802.11s WLAN Mesh, the shared key between the supplicant and the authenticator is generated by the authentication server, so all messages between them can be learned by the authentication server. In addition, the shared-key based authentication mode cannot provide perfect forward secrecy, so all keys derived from the shared key are exposed once it is compromised. Based on EMSA, and using the techniques of three-party Diffie-Hellman key exchange and separate authentication payloads, a new authentication protocol is proposed that overcomes the above shortcomings for station roaming. The new protocol needs only four rounds to realize mutual authentication and key confirmation among the supplicant, the authenticator, and the authentication server; therefore the four-way handshake, which is necessary in EMSA, is not required in the new solution. Furthermore, two different authentication modes, the shared-key based mode and the signature based mode, can be expressed by a single protocol by virtue of the separate authentication payload technique: when a different authentication mode is adopted, the protocol remains unchanged. Finally, the security and performance of the new protocol are analyzed. The results show that the new protocol is universally composable (UC) secure and has much better performance than the current solutions.
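The toy code below only illustrates why ephemeral Diffie-Hellman exchanges provide perfect forward secrecy; the parameters are insecure toy values, and the paper's three-party exchange and separate authentication payloads are not reproduced.

```python
# Toy sketch of perfect forward secrecy via ephemeral Diffie-Hellman
# (illustration only; not the paper's protocol, not secure parameters).

import secrets

P = 2**127 - 1          # toy prime modulus
G = 3                   # toy generator

def dh_session():
    """Each side draws a fresh ephemeral exponent for every session."""
    a = secrets.randbelow(P - 2) + 1      # supplicant's ephemeral secret
    b = secrets.randbelow(P - 2) + 1      # authenticator's ephemeral secret
    A, B = pow(G, a, P), pow(G, b, P)     # exchanged public values
    k_sta, k_auth = pow(B, a, P), pow(A, b, P)
    assert k_sta == k_auth
    # a and b are discarded afterwards; a later compromise of long-term
    # credentials cannot recover this session key.
    return k_sta

if __name__ == "__main__":
    print(hex(dh_session())[:18], hex(dh_session())[:18])  # differs per session
```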
Abstract:
Due to their role in active defense against worm propagation, benign worms are attracting wide attention in the field of worm research. The idea of a benign worm is to transform a malicious worm into an anti-worm that spreads itself using the same mechanism as the original worm and immunizes each host it reaches. This allows an active countermeasure against malicious worms that can potentially be deployed with no additional infrastructure in place. The divide-and-rule hybrid benign worm is one pattern of hybrid benign worm; it avoids the additional traffic generated by benign worms in the final stage of countering the malicious worm. Divide-and-rule hybrid benign worms are classified into three subtypes by analyzing their characteristics. The propagation models of these three subtypes are derived both with and without time delay, depicting the process by which divide-and-rule hybrid benign worms counter malicious worms. Finally, simulations validate the propagation models and show that the composite divide-and-rule hybrid benign worm is the most effective approach for containing worm propagation under the same infection conditions. These models lead to a better understanding and prediction of the scale and speed at which benign worms counter worm propagation.
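For intuition, the sketch below integrates a classical random-scanning epidemic model in which a benign worm released later both immunizes susceptible hosts and disinfects infected ones; the coefficients and interaction form are illustrative and do not reproduce the paper's divide-and-rule or time-delay models.

```python
# Classical epidemic-style sketch (Euler integration) of a benign worm
# countering a malicious worm; all parameters are illustrative assumptions.

def simulate(n=360_000, m0=10, b0=10, beta_m=1e-7, beta_b=1e-7,
             t_release=50.0, dt=0.05, t_end=600.0):
    s, m, b = n - m0, float(m0), 0.0       # susceptible, malicious, benign
    t, trace = 0.0, []
    while t < t_end:
        if b == 0.0 and t >= t_release:
            b, s = float(b0), s - b0        # benign worm released later
        ds = (-beta_m * m * s - beta_b * b * s) * dt
        dm = ( beta_m * m * s - beta_b * b * m) * dt
        db = ( beta_b * b * (s + m)) * dt   # benign worm spreads and disinfects
        s, m, b, t = s + ds, m + dm, b + db, t + dt
        trace.append((t, m))
    return trace                            # malicious population over time

if __name__ == "__main__":
    trace = simulate()
    peak_t, peak_m = max(trace, key=lambda x: x[1])
    print(f"peak: {peak_m:.0f} infected hosts at t={peak_t:.0f}s; "
          f"remaining at t_end: {trace[-1][1]:.0f}")
```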
Abstract:
Sparse matrix-vector multiplication (SpMV) is an important computational kernel in scientific computing applications that tends to perform poorly on modern processors with deep memory hierarchies, due to its low ratio of floating-point operations to memory accesses and its irregular memory access patterns. The register-level blocking algorithm and the heuristic block-size selection algorithm store a sparse matrix as a sequence of small dense blocks and reorganize the computation to finish each block before moving on to the next, thus reusing the elements of the vector x to optimize SpMV performance. Several key optimization techniques adopted in the OSKI software package are analyzed and summarized, and performance tests on real matrix data are performed. The tests indicate that, to realize the intended speedup of these optimization techniques, the SpMV kernel must be called about 100 times to amortize the overhead of the performance optimization. Ten real matrices are tested to compare the performance of the heuristic register-blocking algorithm with the general algorithm; the average speedups are 1.69 on the Pentium 4 platform and 1.48 on the AMD Athlon platform.
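A minimal sketch of 2x2 register-level blocking (block-CSR layout) is shown below; the fixed 2x2 block size stands in for the heuristic block-size selection, and the toy matrix is invented.

```python
# Minimal 2x2 register-blocked SpMV (block-CSR layout) to illustrate how
# blocking reuses x entries kept in registers; block size is fixed here.

import numpy as np

def spmv_bcsr_2x2(brow_ptr, bcol_idx, bvals, x, y):
    """y += A*x, with A stored as 2x2 dense blocks in block-CSR order."""
    for bi in range(len(brow_ptr) - 1):
        y0 = y1 = 0.0
        for k in range(brow_ptr[bi], brow_ptr[bi + 1]):
            j = 2 * bcol_idx[k]
            a = bvals[k]                     # one 2x2 dense block
            x0, x1 = x[j], x[j + 1]
            # Four multiply-adds reuse x0 and x1 held in registers.
            y0 += a[0, 0] * x0 + a[0, 1] * x1
            y1 += a[1, 0] * x0 + a[1, 1] * x1
        y[2 * bi] += y0
        y[2 * bi + 1] += y1
    return y

# Tiny example: a 4x4 matrix with two nonzero 2x2 blocks on the diagonal.
blocks = [np.array([[4., 1.], [0., 3.]]), np.array([[2., 0.], [5., 6.]])]
print(spmv_bcsr_2x2([0, 1, 2], [0, 1], blocks, np.arange(4.), np.zeros(4)))
```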
Abstract:
The reliability and security of components hinder the development of component technology. Currently, component security testing is rarely researched as a dedicated subject, and there are few feasible approaches or technologies for detecting component security vulnerabilities; problems with component reliability and security remain unsolved. The authors propose a fault injection model (FIM) for component security testing, and then give the related definitions of the FIM model and its matrix specification. A fault-injection test-case generation algorithm for component security, TGSM (test-case generation based on the solution matrix), is proposed on the basis of FIM. TGSM generates a solution matrix that meets K-factor coverage according to the matrix form of the FIM model, and the rows of the solution matrix constitute the fault-injection test cases. The FIM model is implemented in the research project CSTS (component security testing system). Finally, the experimental results show that the approach, generating fault-injection test cases with 3-factor coverage, is effective: it can trigger the vast majority of security exceptions by using the appropriate test cases, demonstrating that FIM is effective and practical.
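As a generic stand-in for the idea of K-factor coverage (not the paper's TGSM solution-matrix construction), the greedy sketch below picks fault-injection cases until every pair of factor levels is covered; the factors and levels are invented.

```python
# Generic greedy K-factor-coverage sketch (pairwise here); a stand-in
# illustration only, not the paper's solution-matrix algorithm.

from itertools import combinations, product

def greedy_k_coverage(factors, k=2):
    """Pick rows (one level per fault-injection factor) until every
    k-way combination of factor levels is covered at least once."""
    names = list(factors)
    candidates = list(product(*(factors[n] for n in names)))
    uncovered = {(cols, levels)
                 for cols in combinations(range(len(names)), k)
                 for levels in product(*(factors[names[c]] for c in cols))}
    rows = []
    while uncovered:
        best = max(candidates,
                   key=lambda r: sum(tuple(r[c] for c in cols) == lv
                                     for cols, lv in uncovered))
        rows.append(dict(zip(names, best)))
        uncovered = {(cols, lv) for cols, lv in uncovered
                     if tuple(best[c] for c in cols) != lv}
    return rows

# Example: environment faults injected at a component interface.
faults = {"arg_type": ["valid", "wrong_type"], "arg_len": ["empty", "huge"],
          "resource": ["ok", "file_missing", "no_memory"]}
for case in greedy_k_coverage(faults, k=2):
    print(case)
```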
Abstract:
Traditional approaches to automatically grading student programs do not take into account how a student program solves the given programming task, and cannot evaluate how close the source code is to correct solutions. Therefore, a new approach based on program understanding is proposed. A model is implemented by combining the common process and basic strategies of program understanding with the thinking mode of manual grading. A set of typical model programs representing correct ways of implementing the programming task is provided and matched against the student program. Firstly, the student program and the model programs are transformed into an intermediate representation called the system dependence graph. Then, program normalization is performed on the system dependence graphs to remove code variations, so as to recognize syntactically different but semantically equivalent constructs. Finally, the normalized system dependence graph of the student program is matched against those of the model programs and a mark is given. This approach has been applied to an on-line examination system for the C programming language. Test results show that it can evaluate how close source code is to correct solutions in terms of both syntax and semantics, and that it achieves high grading accuracy.
Abstract:
A workflow model must be described correctly to guarantee successful execution at runtime, so model verification technology is important; it can be classified into syntactic, structural, and semantic verification. The strictest and highest-level verification is semantic verification, which has not yet been well solved in workflow research. Furthermore, the correctness of the control logic depends on the business semantics and is one of the factors influencing structural soundness. By analyzing the business rules and their constraint parts described in the workflow model, the semantics of the business rules can be formalized as expressions, so verifying the semantic integrity of the business rules can be converted into verifying that of the constraint sets. Constraint sets are used to decide which path is chosen for execution. If a constraint set described in a condition node of the workflow model misses some semantics, or expresses redundant or meaningless semantics, this can also cause an erroneous structure, which is one of the important causes of unsuccessful execution. A domain-of-universe coverability theorem and a decision-tree-based verification algorithm are put forward to verify the semantic integrity of constraint sets, and the structural soundness of the workflow model is thereby ensured. This verification technology is independent of specific modeling methods; complete generality, modeling independence, and wide applicability are its three advantages.
Abstract:
Due to the variable and dynamic characteristics of trustworthy software evaluation demands, and to experts' subjective decision-making with bounded rationality, evaluating multidimensional and multi-scale trustworthy software is an important and challenging research problem. To address it, a trustworthy software evaluation mechanism using utility-based evidence theory is discussed and designed. Firstly, a dynamic construction model for the trustworthiness index tree is defined, including an open index-node database and an index-tree generation algorithm, so that the model can produce an index system that reflects the real trustworthiness evaluation demands and can be used by evaluation agents or users. Secondly, utility theory is used for pre-processing of quantitative and qualitative evaluation information, namely the smoothing and consistency transformation of multi-dimensional trustworthiness evaluation information. On this basis, a trustworthy software evaluation evidential reasoning algorithm, which extracts the valuable data from the original information provided by the evaluation experts, is given based on a distributed evaluation framework and Dempster's rule. Finally, the analysis and experimental results show the validity and rationality of the proposed method, which is especially suitable for large-scale industrial detection software. This work can promote further research on software trustworthiness evaluation theory in complicated environments.
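As a worked illustration of the evidential reasoning step, the snippet below combines two experts' invented mass assignments over a two-element frame with Dempster's rule; the utility-based pre-processing is assumed to have already produced these masses.

```python
# Worked example of Dempster's combination rule over a toy frame
# {trustworthy, untrustworthy}; all mass values are invented.

def dempster(m1, m2):
    """Combine two mass functions whose focal elements are frozensets."""
    conflict, combined = 0.0, {}
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

T, U = frozenset({"T"}), frozenset({"U"})
THETA = T | U                                    # total ignorance
expert1 = {T: 0.7, U: 0.1, THETA: 0.2}           # utility-scaled assessments
expert2 = {T: 0.6, U: 0.2, THETA: 0.2}
print(dempster(expert1, expert2))                # T: 0.85, U: 0.10, THETA: 0.05
```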
Abstract:
More researchers have paid attention to conformant planning since conformant planning problems became benchmarks of the non-deterministic track of the International Planning Competition in 2006. At present, all planners that can deal with conformant planning problems find a valid plan by searching the belief states in belief state spaces, which can be expressed as implicit structures or explicit formulas. In this paper, a novel modal logic-based planning framework is proposed by analyzing the syntax and modal semantics of the conformant problem definition. A conformant planning problem is translated into a series of automated reasoning problems based on the modal logic axiomatic system D. Within this framework, two modal logic-based encodings and the corresponding sets of modal formulae are proposed by constructing the axioms and inference rules of the corresponding axiomatic systems, such as explanatory frame axioms and action necessity rules. This ensures that the theorem-proving process for the corresponding theorems in axiomatic system D is equivalent to the planning process of the original problem. The soundness of this translation method is also demonstrated. This method can be viewed as a novel attempt following the well-known translation methods, which translate the original problems into SAT, CSP, linear programming, model checking, and so on.
Abstract:
Qualitative spatial reasoning (QSR) studies the spatial relations of objects in space. Most QSR research aims at the spatial relations between objects of a single dimension (i.e., region-region or line-line relations). The spatial relation of multi-dimensional objects refers to the situation in which point, line, and region objects appear in one scene at the same time; it is very common in geographical information systems (GIS). Qualitative spatial relations of multi-dimensional objects have very important theoretical meaning and practical value, but very little previous work in QSR has focused on them. By improving previous research on the multi-dimensional region connection calculus model, a multi-dimensional topological model (MRCC5) is given, and the complexity of its constraint satisfaction problem is studied. A multi-dimensional extension of the qualitative size relation, called the MDS model, is proposed, and its reasoning problem is also investigated. Based on the above work, a model combining RCC5 and MDS is proposed, together with its reasoning algorithm. This work extends related work in qualitative spatial reasoning to the multi-dimensional field, and the reasoning on multi-dimensional topological relations is improved. It is the first time that a multi-dimensional size relation model and a relation model integrating topology and size have been studied.
Abstract:
Along with the constant development of the Internet and the ever-increasing amount of data, the role of search engines has become increasingly evident, and more users rely on them to find the information they need. In order to cluster search results more effectively, and thus facilitate locating information among the otherwise unstructured results, the authors propose a text clustering algorithm, LSOM, which is based on the self-organizing map (SOM) and latent semantic indexing (LSI). It requires no predefined number of clusters and has the advantages of flexibility and precision. For the high-dimensional text feature space, LSI is performed to discover a new low-dimensional semantic space, in which the semantic relationships between features are strengthened while the noisy features of the original space are weakened or eliminated; in addition, the clustering process becomes more efficient thanks to the effective dimension reduction. In LSOM, a cluster label extraction method is also developed, and the extracted labels are further used to resolve the cluster boundary detection problem, which is non-trivial when SOM is applied to text clustering. Experimental results show that the LSOM algorithm performs better than existing counterparts in terms of both cluster label quality and F-measure.
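A numpy-only sketch of the two stages (truncated-SVD LSI reduction followed by a small SOM) is given below; the map size, learning-rate and neighborhood schedules are assumptions, and LSOM's label extraction and boundary detection are not reproduced.

```python
# Minimal LSI + SOM sketch: reduce a term-document matrix with a truncated
# SVD, then train a small self-organizing map on the reduced document vectors.

import numpy as np

def lsi(term_doc, k):
    """Project documents (columns of term_doc) into a k-dim semantic space."""
    u, s, vt = np.linalg.svd(term_doc, full_matrices=False)
    return (np.diag(s[:k]) @ vt[:k]).T            # one k-dim row per document

def train_som(docs, grid=(4, 4), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(grid[0] * grid[1], docs.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for t in range(iters):
        x = docs[rng.integers(len(docs))]
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))       # best matching unit
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))                # neighborhood kernel
        w += lr * h[:, None] * (x - w)
    return w, coords

if __name__ == "__main__":
    td = np.random.default_rng(1).random((500, 60))       # toy term-doc matrix
    weights, coords = train_som(lsi(td, k=20))
    print(weights.shape)    # each SOM unit is a prototype in the semantic space
```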
Abstract:
Temporal multi-document summarization (TMDS) aims to capture the evolving information of relevant document sets across time periods. Different from traditional static multi-document summarization, it handles a dynamically growing collection relevant to a topic, and how to resolve the key problems in this temporal context is a new challenge. This paper focuses on summarizing series news reports in a generic and extractive way. According to the temporal characteristics of series news reports at different levels of topical detail, a content selection method based on a macro-micro importance discriminative model is proposed. The method mines the temporal characteristics of series news reports from both the macro and micro views to provide cues for content selection. Firstly, important time points are selected by the macro importance discriminative model; then important sentences are selected by the micro importance discriminative model; the two models are then integrated into a macro-micro importance discriminative model. Lastly, the summary sentences are ordered chronologically. Experimental results on five groups of Chinese news corpora show that this method is effective. They also show that the macro and micro temporal characteristics of series news are recursive to some extent, and that macro coarse filtering helps the content selection of TMDS.
Abstract:
Attention assigns the limited processing resources to the information that needs fine-grained processing; this enhances the detection ability and response speed of visual information processing. In the past few years, the allocation mechanism of attention has been studied extensively and many approaches have been developed. The authors propose a computational model of directional visual attention that simulates biological visual attention systems based on their physiological structure. The proposed model has three layers: a visual image input layer, an object encoding layer, and a control layer. The visual image input layer transforms the visual image into a retinal image through the imaging mechanisms of the biological retina and adjusts the radius of the fovea to cover the currently attended object. The object encoding layer first encodes the objects in the retinal image by maximum-gradient edge detection and the c-means algorithm, and then extracts fundamental information about each object, including color, centroid, set of edge pixels, etc. The control layer transfers the attention focus according to the characteristics of the directional objects in the knowledge base; it also uses spatial information, so that the object spatially nearest to the current focus of attention has the highest processing priority. Experimental results indicate that, with the proposed model, the attention focus can be transferred to directional objects.
Abstract:
Scene image categorization is a basic problem in the field of computer vision. A content-correlation based scene image categorization method is proposed in this paper. First of all, dense local features are extracted from the images; the local features are quantized into visual words, and each image is represented by a "bag-of-visual-words" vector. Then a generative model based on the logistic-normal distribution is used to learn the themes in the training set and the theme distribution of each training image. Finally, an SVM-based discriminative model is used to train the multi-class classifier. The proposed approach has the following advantages. Firstly, it uses the logistic-normal distribution as the prior distribution of themes; the correlation among themes is induced by the covariance matrix of the logistic-normal distribution, which makes the estimated theme distributions more accurate. Secondly, manual tagging of image content is not required in the learning process, avoiding the heavy human labor and subjective uncertainty introduced by labeling. A new local descriptor is also proposed, which combines the gradient and color information of the local area. Experimental results on a natural scene dataset and a man-made scene dataset show that the proposed scene image categorization method achieves better results than traditional methods.
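The snippet below sketches only the bag-of-visual-words quantization step with toy data; the logistic-normal theme model and the SVM stage of the paper are not reproduced, and the vocabulary and descriptors are randomly generated placeholders.

```python
# Bag-of-visual-words quantization sketch: map each local descriptor to its
# nearest visual word and build a normalized word-frequency vector per image.

import numpy as np

def bovw_histogram(descriptors, vocabulary):
    """Return the normalized visual-word histogram of one image."""
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                     # nearest-word index per patch
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
vocab = rng.random((200, 128))                    # toy vocabulary of 200 words
image_descriptors = rng.random((300, 128))        # dense local features of one image
print(bovw_histogram(image_descriptors, vocab)[:5])
```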
Abstract:
Several classifiers are usually combined to improve classification accuracy in machine learning, and the effectiveness of such combination is justified by weak learning theory. The linear combination of classifiers, called weighted voting, is one of the most common combination methods; the widely used AdaBoost and Bagging both adopt weighted voting. Two problems have to be solved: when a classifier combination is effective, and what the best combination is. The coefficient selection condition for the effectiveness of classifier combination and the coefficient formula for the best combination are given for the case of many mutually independent classifiers. The error of the combined classifier is analyzed, and it is concluded that the classification error rate drops exponentially as the number of classifiers increases, even with simple voting, provided the error rate of every classifier is uniformly bounded below 1/2. Based on this conclusion and on AdaBoost, some new ensemble learning algorithms are proposed; one of them directly and rapidly improves the classification accuracy of the combined classifier. The reasonableness and scientific basis of this algorithm are analyzed: it extends the traditional method of training and selecting classifiers to minimize the classification error rate. It is proved that the combination used in AdaBoost is effective and is sometimes the best combination. A classifier combination theory and conclusions for the multi-class problem, similar to those for the two-class problem, are also given, including the effectiveness condition, the best combination, error estimation, etc. Moreover, AdaBoost is extended to some extent.
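The exponential-decay claim under simple voting can be checked numerically; the snippet below compares the exact majority-vote error of n independent classifiers, each with error rate eps < 1/2, against the usual Hoeffding-style bound (the specific values of n and eps are illustrative).

```python
# Numerical check: majority-vote error of n independent classifiers with a
# common error rate eps < 1/2 decays exponentially in n.

from math import comb, exp

def majority_vote_error(n, eps):
    """Exact probability that more than half of n independent classifiers,
    each wrong with probability eps, are wrong simultaneously."""
    return sum(comb(n, k) * eps**k * (1 - eps)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def hoeffding_bound(n, eps):
    return exp(-2 * n * (0.5 - eps) ** 2)

for n in (5, 15, 45):   # odd n avoids ties
    print(n, round(majority_vote_error(n, 0.3), 6), round(hoeffding_bound(n, 0.3), 6))
```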
Abstract:
Recently, privacy-preserving data mining has been a hot topic in the data mining research community, and data privacy preservation is one of its important issues. Many methods have been presented for this problem; however, the existing methods often suffer from high distortion loss and low aggregate query accuracy on the private or anonymized dataset. In this paper, based on the existing (alpha, k) privacy preservation model, an improved method, Alpha+, is presented using the lossy decomposition theory of relational databases. Alpha+ first uses the (alpha, k) method to generate the private dataset from the original database. Then the private dataset is projected onto two separate tables, NSS and SS, which are related through the common attributes, so that the redundant information produced by their lossy join can be used to protect the private information. Second, Alpha+ merges identical tuples in the NSS and SS tables to reduce their size, and the modified NSS and SS tables are finally published. A comparative security analysis between Alpha+ and other similar methods is also given. The experimental results show that Alpha+ outperforms the existing methods in terms of aggregate query accuracy on the private dataset.
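A toy illustration of the lossy-join idea is given below; the attribute names, group ids, and values are invented, and the (alpha, k) anonymization and duplicate-merging steps of Alpha+ are omitted.

```python
# Toy illustration of lossy decomposition: publishing two projections that
# share only a grouping attribute hides the true record-to-sensitive pairing.

private = [   # (group_id, zipcode, age, disease) after anonymization
    (1, "47601", 24, "flu"),
    (1, "47602", 29, "cancer"),
    (2, "47901", 35, "flu"),
    (2, "47905", 38, "flu"),
]

# Publish two projections that share only the grouping attribute.
nss = [(g, z, a) for g, z, a, _ in private]          # quasi-identifier part
ss  = sorted({(g, d) for g, _, _, d in private})     # sensitive part, merged

# Joining them back on group_id is lossy: every record in a group becomes
# linked to every sensitive value of that group, hiding the true pairing.
join = [(z, a, d) for g1, z, a in nss for g2, d in ss if g1 == g2]
print(len(private), len(join))    # 4 real tuples vs. 6 tuples in the lossy join
```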
Abstract:
When there are multivalued dependencies among the data elements of an XML document under incomplete information, data redundancy and update anomalies often occur if no constraints are imposed on the XML document. In order to avoid data redundancy and update anomalies, the normalization of XML documents under incomplete information is discussed based on XML strong multivalued dependencies. The definition of XML strong multivalued dependencies for the incomplete XML document tree is formalized based on the equivalence and consistency of node information. Based on the hierarchical XML strong multivalued dependencies, the condition for an incomplete XML document tree to satisfy the XML strong multivalued dependency normal form is proposed. A justifying theorem that ensures the incomplete XML document tree is redundancy-free is given, an algorithm for normalizing an incomplete XML document tree is presented, and its time complexity is analyzed. Finally, an instance is analyzed according to the proposed theorem. The results of this work can deal with the data redundancy caused by hierarchical XML strong multivalued dependencies in XML documents under incomplete information, and help achieve the objectives of XML database normalization theory.
Abstract:
With IC technology scaling down into the nanometer regime, voltage disturbances severely influence the performance of VLSI circuits. Both via mismatches in manufacturing and electromigration of Cu interconnect wires in working ICs can produce many candidate open defects in power/ground (P/G) networks, which in turn significantly aggravate voltage disturbances. In order to test these open defects quickly, it is imperative to analyze their influence on P/G networks efficiently. Therefore, a single-defect successive over-relaxation algorithm (SD-SOR) is proposed in this paper to quickly analyze the nodal voltage-drop distribution of a P/G network caused by a single open defect. Starting from the voltage distribution of the defect-free P/G network, SD-SOR only needs to relax a few nodes that surround the defect and thus suffer a visible influence from it. Compared with the traditional global SOR method, which relaxes all nodes in order, SD-SOR has the following advantages. The first is locality: for each open defect, SD-SOR relaxes outward from the nodes connected to the defect to the surrounding nodes like a propagating wave, and the wave stops at nodes whose IR-drop variation is less than a pre-assigned threshold. The second is efficiency: SD-SOR not only relaxes a small portion of the nodes in the P/G network but also needs far fewer relaxation iterations. The third is high accuracy: because most nodes are far away from the defect and suffer negligible influence, SD-SOR can obtain sufficiently high accuracy by relaxing only a few surrounding nodes. Experimental results show that the proposed SD-SOR method is 57 times faster than the preconditioned global SOR method, with a maximum error of 0.95%.
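A generic SOR sketch for the nodal equations G v = i of a P/G grid is shown below; the optional active-node set only hints at how SD-SOR restricts relaxation to nodes near the defect, and the wave-propagation stopping rule itself is not implemented. The toy conductance matrix and current loads are invented.

```python
# Generic successive over-relaxation (SOR) for the nodal equations G v = i;
# SD-SOR would additionally confine the sweep to nodes near the defect.

import numpy as np

def sor(G, i, omega=1.6, tol=1e-9, max_iter=10_000, active=None):
    n = len(i)
    v = np.zeros(n)
    nodes = range(n) if active is None else active   # local node set, if any
    for _ in range(max_iter):
        delta = 0.0
        for k in nodes:
            r = i[k] - G[k] @ v                      # residual at node k
            dv = omega * r / G[k, k]
            v[k] += dv
            delta = max(delta, abs(dv))
        if delta < tol:
            break
    return v

# Toy 3-node ladder: diagonally dominant conductance matrix and current loads.
G = np.array([[ 3.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  3.0]])
i = np.array([0.5, 0.2, 0.1])
print(sor(G, i))
```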