• China Premium Science and Technology Journal
  • CCF-recommended Class A Chinese journal
  • T1-class high-quality science and technology journal in computing

2005  Vol. 42  No. 11

Abstract:
A symbolic Dempster-Shafer (D-S for short) theory is presented for imprecise and uncertain reasoning. The numerical belief-degree set [0,1] of classical D-S theory is replaced by a totally ordered scale of symbolic values. After the qualitative mass function and qualitative belief function are defined via qualitative operators, the fundamental relation between them is discussed, and the qualitative combination of evidence is examined in detail. Compared with related work, the symbolic D-S theory has two essential characteristics. One is that it inherits the advantages of classical D-S theory in uncertain reasoning by re-defining the theory's essential concepts. The other is that the qualitative operators involved are strictly defined by logical formulas as well as intuitive properties. Consequently, the symbolic D-S theory is well suited for reasoning about uncertainty and imprecision within the D-S framework.
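The classical combination rule that the symbolic theory re-defines can be illustrated numerically. The sketch below shows classical Dempster combination and the belief function, not the paper's qualitative variant; the frame elements, masses and set names are invented for the example:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: conjunctive combination of two mass functions
    (dicts mapping frozenset -> mass), renormalized after discarding
    the mass that falls on the empty set (the conflict)."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

def belief(m, hypothesis):
    """Bel(A) = sum of the masses of all focal sets contained in A."""
    return sum(v for s, v in m.items() if s <= hypothesis)
```

For instance, combining m1 = {{p}: 0.6, {p,q}: 0.4} with m2 = {{q}: 0.5, {p,q}: 0.5} discards the conflicting mass 0.3 and renormalizes the rest.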
Abstract:
Description logics express knowledge in terms of objects, but existing description logics do not address organizational goals and are therefore ill-suited to modeling MAS organizations. To solve this problem, a model of MAS organization is first given, which captures the relationship between an organization and its sub-organizations as well as the organization's goal. Based on this model, a description logic framework, ALCA, is presented; ALCA is proved to be decidable, and a decision algorithm is given. Finally, an example illustrates the expressiveness of ALCA.
Abstract:
Multi-issue negotiation is a key problem in electronic bargaining, and multi-agent technology provides an effective solution to it. Presented in this paper is a multistage framework for multi-issue negotiation among rational agents under time constraints in an incomplete-information setting. The framework describes negotiation over the prices of multiple issues. To reduce the complexity of multi-issue negotiation, the negotiation is decomposed into multistage negotiation in which each stage has the same size (number of issues). The order and number of stages are decided exogenously, while the order of issues within each stage is decided endogenously. For a given decomposition of the bargaining problem into equal-sized stages, the framework yields an optimal negotiation agenda. In particular, it also strengthens the learning capability of the participating agents by building a learning system (LS) for them. Finally, a prototype system based on the negotiation framework is implemented, demonstrating that the framework is effective.
Abstract:
Combinatorial auctions are efficient mechanisms for allocating resources in complex marketplaces, and auctions with XOR-bids and OR-bids allow bidders to express their general preferences more exactly. Winner determination, which is NP-complete, is the core problem in combinatorial auctions. In this paper, a zero-one programming model and a heuristic algorithm are proposed for solving the winner determination problem with XOR-bids and OR-bids. To identify non-competitive bids among the XOR-bids and OR-bids, new heuristic rules and methods are designed for the preprocessing step. The heuristic algorithm is a partheno-genetic algorithm combined with an immune operator, in which new heuristics evaluate the bids and guide vaccine selection and vaccination. Simulation results show that the algorithm performs well on large problems and that the immune operator greatly improves the search ability and convergence speed.
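The winner determination problem itself can be stated concretely. The following is an exhaustive baseline for tiny instances, not the paper's zero-one programming model or genetic algorithm; the bidders, item names and prices are invented, and bids from the same bidder are treated as an XOR group:

```python
from itertools import combinations

def winner_determination(bids):
    """Exhaustive winner determination for a small combinatorial
    auction. Each bid is (bidder, items, price); at most one bid
    per bidder may win (XOR), and winning bids must request
    pairwise-disjoint item sets. Returns (best value, winning bids)."""
    best_value, best_sel = 0.0, ()
    for r in range(1, len(bids) + 1):
        for sel in combinations(bids, r):
            bidders = [b for b, _, _ in sel]
            if len(set(bidders)) < len(bidders):
                continue  # two bids from one bidder: XOR violated
            items = [i for _, s, _ in sel for i in s]
            if len(set(items)) < len(items):
                continue  # two winning bids share an item
            value = sum(p for _, _, p in sel)
            if value > best_value:
                best_value, best_sel = value, sel
    return best_value, best_sel
```

The NP-completeness of the problem is what makes heuristics such as the paper's partheno-genetic algorithm necessary once the number of bids grows.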
Abstract:
An approach to journal text categorization that combines local linearity with one-class classification is proposed. Local linearity is introduced to determine the samples' low-dimensional manifold, which can be regarded as the distribution of the samples in a low-dimensional mapping space; the border between positive and negative samples is then determined by the one-class method. Compared with the K-nearest-neighbor algorithm, linear SVM and one-class SVM, the new algorithm achieves higher precision, simpler parameter estimation and easier risk control, giving an effective approach to text categorization.
Abstract:
A new hand-shape biometric verification method based on curve fitting is proposed, which uses the curve-fitting coefficients of the finger contour as features. A curve distance is defined to compare different finger curves and make the verification decision. In addition, a simplified calculation method based on the curve-fitting coefficients is presented. Experiments show that the sum of the false acceptance rate and false rejection rate is below 1 percent. Compared with existing methods, this method achieves a good tradeoff among accuracy, robustness and efficiency.
Abstract:
The equation λ = cQ² has been widely used as a parameter selection model in hybrid video coder control with Lagrangian rate-distortion optimization; it is called the square model (SM) and is deduced under the assumption that λ is constant. The coefficient c is obtained statistically in different video coding recommendations. However, theoretical analysis shows that λ should not be treated as a constant. A proposition to this effect is stated and supported by mathematical deduction and experimental justification. Based on the proposition, a new Lagrange multiplier selection model called the quadratic-logarithm model (QLM) is deduced, and its parameter is obtained adaptively by the scheme proposed in the paper. Finally, QLM is applied to H.264 and a new rate-distortion optimization scheme is proposed. Simulation results show that, compared with SM, the new multiplier selection model obtains better rate-distortion performance in general (0.03 dB on average) without extra computation. Therefore, the proposed multiplier model is more feasible and reasonable than existing ones.
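The quadratic form of the square model can be motivated by a standard high-rate argument. This is a generic textbook-style sketch under common rate-distortion assumptions (logarithmic rate model, uniform quantizer), not the paper's QLM derivation:

```latex
% High-rate logarithmic rate model and uniform-quantizer distortion:
R(D) = a \ln\frac{\sigma^2}{D}, \qquad D(Q) \approx \frac{Q^2}{12}.
% The Lagrange multiplier trading rate against distortion is then
\lambda = -\frac{\mathrm{d}D}{\mathrm{d}R}
        = \frac{D}{a} \approx \frac{Q^2}{12a} = cQ^2 .
```

The paper's observation is that the constants hidden in this sketch are not in fact constant, which is what motivates replacing SM with QLM.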
Abstract:
A low-power, high-speed, minimum-area architecture that performs the two-dimensional discrete wavelet transform (2-D DWT) of JPEG2000 is proposed, using a line-based lifting scheme. The architecture consists of one row processor and one column processor. The row processor, which is time-multiplexed, computes in parallel with the column processor, so two pixels can be encoded in one clock cycle. Boundary extensions are implemented by an embedded circuit, and memory is minimized. The whole architecture, pipelined to increase speed and hardware utilization, has been implemented in an FPGA and can be used as a compact, independent IP core for JPEG2000 VLSI implementations.
Abstract:
An image authentication algorithm based on image features and hyperchaotic iteration is proposed. A stable feature value is obtained by extracting the edges of the low-frequency components of the wavelet transform of the original image. Hyperchaotic iteration combining this value with the watermark information then produces an index set, which is signed and timestamped with the private keys of the owner and a trusted third party. Using the public keys, the watermarked image can be authenticated without the original image. Experimental results show that the algorithm is robust to noise addition, filtering, compression and rotation.
Abstract:
The disadvantages of traditional color image retrieval based on color histograms are large storage requirements and high computational complexity; moreover, retrieval results under varying illumination are worse than expected. A fast image retrieval method based on a weighted chromaticity histogram is therefore proposed. The method removes illumination effects using an illumination-invariance model. The normalized 2-dimensional chromaticity histogram, whose pixel gray values are weighted by the proportion of the corresponding chromaticity in the raw image, is compressed by wavelet and DCT transforms. Experiments show that retrieval results with the proposed method are improved, and that retrieval takes much less time than with traditional methods.
Abstract:
A complex sign Doppler estimator (CSDE) is proposed for real-time hardware-based synthetic aperture radar (SAR) Doppler centroid estimation. The CSDE reduces the complexity of Doppler centroid estimation by exploiting the statistical properties of SAR data and applying the complex arcsine law to the correlation coefficient calculation. Because SAR data approximate a stationary complex Gaussian process, fast techniques for estimating the angle of the normalized correlation can be applied. The proposed estimator requires only the sign bits of the real and imaginary components of the raw data when computing the phase, so very simple arithmetic suffices for the whole computation procedure. Analytical and simulation results show that the CSDE requires much less computation and hardware implementation area while maintaining performance comparable to the known sign Doppler estimator (SDE).
Abstract:
Bit commitment is an important primitive in modern cryptography and plays a crucial role in the design of complex cryptographic protocols such as zero-knowledge proofs. Mayers, Lo and Chau independently proved that all existing quantum bit commitment (QBC) schemes are insecure, a result known as the Mayers-Lo-Chau no-go theorem on QBC. However, this does not rule out computationally secure QBC: at Eurocrypt 2000, Dumais et al. showed that a computationally secure QBC scheme can be constructed from any quantum one-way permutation. In this paper, quantum bit commitment is extended to quantum multi-bit commitment using error-correcting codes, and the proposed scheme is shown to be perfectly concealing and computationally binding. Methods for designing quantum digital signature and authenticated encryption schemes based on QBC are also proposed. Security analysis shows that the proposed protocols are secure.
Abstract:
In a (t, n) secret sharing scheme, a dealer splits a secret into n shares and sends one share to each of n participants; any t share holders can later pool their shares and recover the secret using a publicly specified algorithm. In the secret recovery phase of such schemes, detecting cheating by share holders is an important technical issue. Based on the RSA cryptosystem and a one-way function, Fei et al. recently proposed two cheat-proof secret sharing schemes. It is shown here, however, that their schemes do not in fact prevent cheating: malicious share holders can easily forge false shares that still satisfy the authentication equality. As a result, honest participants can be cheated into believing that the recovered secret is correct.
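The underlying (t, n) mechanism that such schemes extend is Shamir's secret sharing. The sketch below shows the classical share/recover steps only; the cheat-detection machinery discussed in the abstract is not shown, and the prime modulus and parameters are illustrative:

```python
import random

P = 2**61 - 1  # a large prime modulus (chosen for the sketch)

def make_shares(secret, t, n):
    """Shamir (t, n) sharing: evaluate a random degree t-1
    polynomial with constant term `secret` at x = 1..n over GF(P)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Recover the secret by Lagrange interpolation at x = 0 over
    GF(P); needs exactly t distinct shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

A forged share fed into `recover` yields a wrong secret with no error indication, which is precisely why the cheat-detection mechanisms the abstract analyzes are needed on top of the basic scheme.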
Abstract:
Through a formal analysis of network state and the goal of fault diagnosis, a new fuzzy-event-based network troubleshooting method, FTFD, is proposed. By introducing a situation-detection function, FTFD characterizes complicated fuzzy faults through precise mathematical transformation, and an "abnormal degree" is defined by a probability vector with belief functions. The method effectively reduces false positives and false negatives, and is aimed at real-time fault diagnosis. An operational prototype system demonstrates its feasibility and the effectiveness of real-time fault diagnosis.
Abstract:
Traditional distributed intrusion detection systems have many shortcomings, such as heavy interdependence among components and weak reliability. AAADIDS, an autonomous-agent-based adaptive distributed intrusion detection system, is proposed to solve these problems. The functions of its components and their collaboration across the network are described. AAADIDS adopts new analysis policies to cope with heavy detection loads and uses DCAs to increase the system's adaptability. Compared with traditional distributed intrusion detection systems, AAADIDS is more adaptive and efficient.
Abstract:
Many operating systems have already adopted strong TCP ISN generation methods. However, the success probability of a TCP Reset attack is not merely 1/2^32 as commonly expected. Based on Paul Watson's "Slipping in the Window: TCP Reset Attacks" method, combined with sequence number guessing based on chaotic time series analysis, a new TCP Reset attack method is presented in this paper to test the security of TCP ISN generation methods. Experimental results under the Windows operating system indicate that this method increases the success probability of TCP Reset attacks: an attacker can terminate an established TCP connection by sending only 17 RST packets in 10 ms. Current operating systems' TCP ISN generation methods therefore still carry serious security risks.
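Watson's observation can be put in arithmetic form: a RST is accepted if its sequence number lands anywhere inside the receive window, so covering the whole 2^32 sequence space needs far fewer than 2^32 probes. The window sizes below are illustrative, not measurements from the paper:

```python
def rst_probes_needed(window_size):
    """Number of RST probes needed to cover the 2**32 sequence space
    when any sequence number inside the receive window resets the
    connection (ceiling division of the space by the window)."""
    return -(-2**32 // window_size)
```

With a 64 KB window this is on the order of 65,000 probes instead of 4 billion, which is why strong ISN generation alone does not neutralize the attack.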
Abstract:
A workflow authorization model based on roles and tasks is first described. The basic idea of this model is that roles and permissions are not connected directly but are linked through tasks, which makes it more convenient to control and manage the granularity of permissions. An intuitive formal language called RTCL is then proposed, which takes the model as its context to specify role- and task-based workflow authorization constraints. RTCL uses system functions, sets and variable symbols as its basic elements and is proved semantically equivalent to a restricted form of first-order predicate logic called RFOPL. Finally, the expressive power of RTCL is demonstrated by showing how it can express a variety of constraints.
Abstract:
Scalable multimedia bitstreams are very sensitive to packet losses and bit errors in transmission channels, so robust error protection is needed. This paper discusses FEC-based error protection algorithms, whose key objective is to allocate bits between source coding and channel coding so that the reconstructed signal quality at the decoder is optimized. According to their error statistics, transmission channels are classified into two models: the packet-loss channel and the wireless bit-error channel. A uniform problem formulation is given for each model's error protection scheme and optimization objective. Major algorithms in the literature are surveyed, their performance and complexity are analyzed, and future directions are discussed.
Abstract:
Reserving resources at adjacent APs in advance is an important measure for mobile stations running real-time applications such as VoIP during handoff, since it lets them keep the same QoS as before in a wireless LAN. However, present mechanisms either involve complicated computation and are ill-suited to wireless LANs, or are inefficient and fail to reserve resources well for handoff stations. Building on existing resource reservation schemes and the characteristics of wireless LANs, a PCF-station resource reservation scheme is proposed in which real-time VoIP stations use the PCF mechanism to access the channel. In the proposed scheme, when a mobile station detects a new AP, it requests the adjacent AP to reserve resources for it, and the adjacent AP also adjusts the size of the reserved resources adaptively. Simulations show that the proposed scheme achieves a lower handoff drop rate than other reservation schemes without wasting wireless bandwidth.
Abstract:
Based on a new Web cluster architecture, a double-minimum-balance resource-optimizing scheduling algorithm for differentiated service is proposed. First, the front-end scheduler dispatches Web requests to the back-end servers by resource balancing. Second, request priority and resource balancing are combined to design the scheduling queue of Web requests on each back-end server. Extensive simulation experiments were conducted to evaluate the algorithm. Compared with other notable scheduling policies such as the separate scheduling strategy, the results indicate that the double-minimum-balance algorithm improves Web request efficiency by 11% and realizes differentiated service well, demonstrating the generality of the resource-optimizing scheduling algorithm.
Abstract:
A syntactically correct DTD may be inconsistent in the sense that no finite XML document conforms to the structure it imposes; such DTDs should be avoided. However, most existing consistency checking methods for DTDs without integrity constraints only ask whether some valid XML document conforms to the DTD, ignoring the consistency of the DTD's local structures, so a "consistent" DTD may still contain sub-structures to which no valid XML data can conform. To solve this problem, the notion of absolute consistency of DTDs is proposed and the factors bearing on absolute consistency are analyzed. An algorithm for checking absolute consistency of DTDs is presented, which has linear time complexity.
Abstract:
With the prevalence of XML, using relational databases to store and query XML documents has become a hot topic, and many relational storage methods for XML documents have been proposed. A common objective of these methods is to improve the efficiency of XML path queries. In this paper, a numbering-scheme-based relational storage method for XML documents is proposed in which XML path queries can be evaluated quickly. Moreover, with this storage method, XML documents conforming to different DTDs (or schemas) can be kept in the same relational table, and document reconstruction can be done in linear time. Handling of recursive schemas is also discussed. Experimental results demonstrate that this method processes general path queries, such as paths with predicate constraints, faster than the methods proposed by XRel and by Florescu and Kossmann.
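One common family of numbering schemes assigns each node an interval by a depth-first walk, so that ancestor-descendant relationships (the backbone of path queries) reduce to interval containment tests on stored numbers. The abstract does not specify which scheme the paper uses; the sketch below shows a typical (start, end, level) interval encoding with invented element names:

```python
def number_tree(node, counter=None, level=0, out=None):
    """Depth-first numbering: assign each node a (start, end, level)
    triple, where [start, end] covers exactly the node's subtree.
    `node` is a dict with a 'tag' and an optional 'children' list;
    tags are assumed unique here for simplicity."""
    if counter is None:
        counter, out = [0], {}
    counter[0] += 1
    start = counter[0]
    for child in node.get('children', []):
        number_tree(child, counter, level + 1, out)
    out[node['tag']] = (start, counter[0], level)
    return out

def is_ancestor(a, b):
    """u is an ancestor of v iff u.start < v.start and v.end <= u.end."""
    return a[0] < b[0] and b[1] <= a[1]
```

Because the test touches only the two stored triples, an ancestor step in a path query becomes a range predicate in SQL rather than a tree traversal.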
Abstract:
An equivalence relation ≡_HCov is proposed based on the data cube's semantics and the dimension hierarchies of its schema, and the data cube is then hierarchically clustered on the basis of this relation. The advantages of this method are not only that all aggregate records are preserved through equivalence classes, but also that clustering information and hierarchical information are defined. Theoretical analysis and experiments indicate that the method effectively saves storage space and, with the clustering and hierarchical information, supports various kinds of OLAP queries efficiently. The method also supports semantic operations such as roll-up/drill-down and rotation, making OLAP query navigation and behavior analysis of OLAP queries possible.
Abstract:
Traditional index structures for XML data fall into two categories: structure summaries and node-location approaches. Both have problems: structure summaries are often large and perform poorly on complicated queries, while node-location approaches typically incur too many joins when evaluating long queries. To solve these problems, a dynamic index structure for XML data, DifX, is proposed. By exploiting the notion of dynamic bisimilarity, DifX can efficiently support diverse queries. Experimental results show that DifX is an effective and efficient index structure for XML data and performs much better than traditional index structures.
Abstract:
Data broadcast is a widely accepted data dissemination method for mobile computing systems and has attracted considerable research interest. When data broadcast is used to disseminate frequently updated data, it is called updates dissemination. Existing updates dissemination protocols are unsuitable for mobile real-time read-only transaction processing because they ignore the real-time constraints on both data and transactions in mobile real-time database systems. In this paper a new updates dissemination protocol called hybrid forward multi-version data broadcast (HFMVB) is proposed. HFMVB not only guarantees the consistency of read-only transactions, but also provides higher data currency and a lower miss rate by shortening the consistency interval, broadcasting updates instantly, and utilizing broadcast on demand. Simulation results show that HFMVB outperforms existing updates dissemination protocols and is more appropriate for mobile real-time database systems.
Abstract:
OLAP (online analytical processing) queries usually involve multi-table joins, so improving multi-table join performance is a key issue for OLAP query processing. A distortional multi-table join index method is proposed that achieves better multi-table join performance than parallel multi-join algorithms and ordinary join indexes. Based on a query model base, in which queries are expressed in SQL, the algorithm builds a series of distortional multi-table join fact tables and indexes on them. In query processing, dimension tables are joined with the distortional multi-table join fact tables instead of the original fact tables; furthermore, the distortional multi-join indexes can be updated dynamically during query processing. Theoretical analysis and experimental results show that the distortional multi-table join index is an efficient method for multi-table joins in data warehouses.
Abstract:
Since architecture description languages are not capable enough of analyzing and validating the dynamic behavior of software architecture, a hierarchical framework for software architecture is proposed in this paper, in which abstract algebra is used to abstract the components, connectors and architectural configuration. Meanwhile, the predicate/transition (Pr/T) net is introduced to model the dynamic behavior of software architecture, and an efficient model-checking algorithm based on linear temporal logic (LTL) is discussed in detail. Finally, the approach is applied to modeling a concurrency control mechanism in an E-commerce system, and the LTL-based checking results are shown in detail. Practice demonstrates that the presented modeling-and-checking method combines the advantages of linear temporal logic with those of Pr/T nets, providing a novel way to analyze and validate the dynamic behavior of software architecture.
Abstract:
Schedulability testing is a key problem for real-time scheduling algorithms. Since it was first introduced by Liu and Layland in 1973, the RM (rate monotonic) algorithm and its extensions have been widely used in fields such as digital control, command and control, signal processing, and communication systems. Much work has investigated their schedulability and presented corresponding schedulability tests. Because implementing real-time systems requires considering practical issues such as the number of tasks, the period of each task, and scheduling costs, a systematic performance analysis platform is needed to evaluate the tests thoroughly. In this paper, the schedulability tests of RM and its extensions are summarized, a platform is developed to test and compare their performance, and a thorough evaluation and comparison of the schedulability tests is made. The results and analysis are very helpful for selecting appropriate algorithms in real-time system design and implementation.
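The best-known of the schedulability tests surveyed here is Liu and Layland's sufficient utilization bound. The sketch below implements that classical test only, not the paper's evaluation platform; the task sets are invented (C_i = worst-case execution time, T_i = period):

```python
def rm_utilization_test(tasks):
    """Liu-Layland sufficient schedulability test for RM scheduling:
    n independent periodic tasks given as (C_i, T_i) pairs are
    schedulable if total utilization <= n * (2**(1/n) - 1).
    The test is sufficient but not necessary: a task set failing it
    may still be schedulable (an exact check needs response-time
    analysis)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)
```

For n = 3 the bound is about 0.7798, and as n grows it approaches ln 2 ≈ 0.693, which is why the exact (but costlier) tests also covered by such surveys matter in practice.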