ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 November 2006, Volume 43 Issue 11
A Multiobjective Evolutionary Algorithm for Grid Job Scheduling of Multi-QoS Constraints
Zhang Weizhe, Hu Mingzeng, Zhang Hongli, and Liu Kaipeng
2006, 43(11):  1855-1862. 
Grid scheduling and resource management potentially involve the interaction of many human players, such as end users and resource administrators. These players bring different, often contradictory, criteria and make the process of mapping jobs to resources difficult or even impossible. Focusing on independent jobs, the multiobjective job scheduling problem with multi-QoS constraints is formulated and transformed into a general multiobjective combinatorial problem. An advanced evolutionary algorithm is put forward to solve multiobjective grid job scheduling. The evolutionary technique is used to find the non-dominated set of solutions and distribute them uniformly along the Pareto front, so that the best compromise scheduling solution can be found. It is shown via simulation that the algorithm outperforms QoS-Min-min and QoS-Sufferage in user-QoS performance, such as time-dimension, reliability-dimension, and security-dimension QoS utilization, and in system-QoS performance, such as the number of dropped jobs.
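The non-dominated filtering step at the core of such a multiobjective evolutionary scheduler can be sketched in a few lines (an illustrative sketch, not the authors' algorithm; the objective tuples, e.g. makespan, failure risk, and security cost, are assumed to be minimized):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(solutions):
    """Return the Pareto front of a list of objective tuples (all minimized)."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical schedules scored as (makespan, failure_risk, security_cost)
pop = [(10, 0.2, 3), (12, 0.1, 2), (9, 0.3, 4), (11, 0.25, 5)]
front = non_dominated(pop)  # (11, 0.25, 5) is dominated by (10, 0.2, 3)
```

The evolutionary part of the algorithm would then select and recombine individuals so that successive fronts spread evenly along the Pareto surface.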
A Dynamic Access Control Scheme Across Multi-Domains in Grid Environment
Chen Ying, Yang Shoubao, Guo Leitao, and Shen Kai
2006, 43(11):  1863-1869. 
A new dynamic access control scheme, in which an entity's behavior and privileges can be dynamically and flexibly linked, is proposed to improve scalability and to restrain cheating in resource sharing under the traditional role-based access control scheme in grid environments. The new scheme can adjust a user's role according to his behavior, setting up a relationship between a user's privileges and his behavior. Combining this access control scheme with a trust model, the concepts of converting parameter and dynamic role are introduced and applied to the virtual organization (VO), in which the converting parameter is designed to realize reputation conversion across multiple domains. Simulations show that the system can easily realize access control, restrict malicious behavior of entities, and improve scalability.
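The interplay of the converting parameter and dynamic roles can be sketched as follows (an illustrative sketch with made-up [0,1] reputation scales, role names, and thresholds, not the paper's trust model):

```python
def convert_reputation(rep, conv):
    """Converting parameter: map a reputation earned in one domain onto
    another domain's scale (both assumed to be [0,1] here)."""
    return max(0.0, min(1.0, rep * conv))

def dynamic_role(rep, thresholds):
    """Dynamic role: a user's role follows his current reputation, so
    misbehaviour that lowers reputation also lowers privilege."""
    for role, floor in thresholds:          # thresholds sorted high to low
        if rep >= floor:
            return role
    return "guest"

ROLES = [("admin", 0.9), ("member", 0.5)]   # hypothetical role thresholds
```

A user entering a new VO domain would first have his home-domain reputation converted, then be assigned a role from the target domain's threshold table.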
A Dynamic Fault Detection Algorithm under Grid Environments
Tian Dong, Chen Shuyu, and Chen Feng
2006, 43(11):  1870-1875. 
Aimed at the problems that grids are more prone to failures and that existing failure detection algorithms cannot satisfy the unique requirements of grids, a dynamic failure detection algorithm is presented. According to the characteristics of grids and unreliable failure detection theory, models are established for grid systems and fault detection respectively. Combining a heartbeat strategy with the grey prediction method, a dynamic heartbeat strategy is designed, and the prediction model as well as a real-time prediction strategy is presented. Furthermore, on the basis of the dynamic heartbeat strategy, a fault detection algorithm is put forward for grid environments, and its dependability is analyzed. Finally, simulation results show that the algorithm is valid and effective, and can be used for fault detection in grid environments.
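The grey prediction component can be illustrated with a minimal GM(1,1) one-step forecaster applied to heartbeat inter-arrival times (a sketch of the standard GM(1,1) model, not the paper's exact formulation; it assumes a positive, non-constant observation series):

```python
import math

def gm11_next(x0):
    """One-step-ahead forecast with the GM(1,1) grey model.
    x0: recent positive observations, e.g. heartbeat inter-arrival times."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]               # accumulated series
    z = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]   # background values
    y = x0[1:]
    # Least squares for a, b in the grey equation  x0[k] = -a*z[k] + b
    m = n - 1
    szz = sum(v * v for v in z); sz = sum(z)
    sy = sum(y); szy = sum(v * w for v, w in zip(z, y))
    det = szz * m - sz * sz                 # degenerate for a constant series
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    # Forecast the accumulated series, then difference back to the original one
    x1_next = (x0[0] - b / a) * math.exp(-a * n) + b / a
    x1_curr = (x0[0] - b / a) * math.exp(-a * (n - 1)) + b / a
    return x1_next - x1_curr
```

A dynamic heartbeat detector would use such a forecast to set the next timeout adaptively instead of using a fixed heartbeat deadline.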
k Nearest Neighbor Queries Based on Data Grid
Zhuang Yi, Zhuang Yueting, and Wu Fei
2006, 43(11):  1876-1885. 
Proposed in this paper is a novel k-nearest neighbor query algorithm based on the data grid, called GkNN. GkNN proceeds in three steps. First, when a user submits a query vector and k, vector reduction is performed using the DDM index. Then the candidate vectors are transferred to the execution nodes using a vector package technique, and the refinement process is conducted in parallel to obtain the answer set from the candidate vectors. Finally, the answer set is transferred to the query node. The proposed algorithm uses the vector reduction algorithm, the vector package technique, and pipelined parallelism to cope with the heterogeneity of network bandwidth between nodes on the data grid. Analysis and experimental results show that the algorithm performs well, minimizing the response time by decreasing network transmission cost and increasing the parallelism of I/O and CPU.
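The filter-and-refine shape of such a query can be sketched as below (an illustrative sketch: the cheap one-coordinate filter stands in for the paper's DDM index, and the candidate multiplier `slack` is an assumed tuning knob):

```python
import heapq, math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def filter_step(query, vectors, k, slack=3):
    """Cheap filter: keep the k*slack vectors closest to the query on the
    first coordinate, which lower-bounds the true distance, so most far-away
    vectors are never shipped across the network."""
    return heapq.nsmallest(k * slack, vectors, key=lambda v: abs(v[0] - query[0]))

def refine_step(query, candidates, k):
    """Exact k-NN over the surviving candidates (done on the execution nodes)."""
    return heapq.nsmallest(k, candidates, key=lambda v: euclidean(query, v))

def grid_knn(query, vectors, k):
    return refine_step(query, filter_step(query, vectors, k), k)
```

In the distributed setting, the filter runs where the index lives and the refinement runs in parallel on the execution nodes, which is what keeps transmission cost low.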
Service Discovery Framework Using Fuzzy Petri Net
Zhang Guangsheng, Jiang Changjun, and Ding Zhijun
2006, 43(11):  1886-1894. 
The semantic Web, as presented under W3C recommendations, deals with hard semantics in the description and manipulation of crisp data. It cannot process soft semantics, yet much real-world knowledge consists of uncertain or imprecise information. In order to bridge the gap between human-understandable soft logic and machine-readable hard logic, a service-oriented multi-agent framework is proposed to better integrate Web services and agents. In the proposed service discovery framework, agents are classified into three types: service agent, request agent, and broker. Two key issues in the framework are discussed: a service description language and a service matchmaking mechanism. A fuzzy-Petri-net-based service description language is proposed as a specification for publishing or requesting a service: a transition represents a service or request, input places denote preconditions expected to hold before performing the service, and output places denote effects expected to hold after performing it. Meanwhile, through the ontology's class hierarchy, a semantic-based service matchmaking mechanism is given, which can find an appropriate service for a request. A degree of truth is used to quantify the level to which a service can satisfy a request, thereby supporting loose matching.
An End-User-Oriented Approach to Exploratory Service Composition
Han Yanbo, Wang Hongcui, Wang Jianwu, Yan Shuying, and Zhang Cheng
2006, 43(11):  1895-1903. 
Service composition is a prevalent means to construct service-oriented, loosely integrated, and agile applications. Current service composition technologies are mostly developed for situations where business requirements are definite and business processes can be predefined. However, there exist many problem-solving cases in which people are not totally clear in advance how to compose applications, but have to act in a trial-and-error manner. A novel approach to dealing with this challenge is proposed. The focus is on a programming model supporting exploratory service composition as well as a dynamic service recommendation mechanism. The proposed approach is evaluated from the users' perspective.
A Protocol of Bidirectional Authentication and Authorization for IP Multicast
Li Xiaojian and Zhao Qinping
2006, 43(11):  1904-1913. 
As essential elements of the protocol, a multicast-based logical heartbeat clock and a challenge-response authentication scheme based on secret hopping are presented, and the logical clock is then combined with an independent dual nonce chain over the messages. As a result, the principal's secret, the authorization, and the messages are effective only once per run, and the messages are also context sensitive. The formal analysis and test results indicate that this protocol can complete bidirectional authentication and authorization, can authenticate a member who departs from the secret communication without any announcement, and can resist direct or deflected internal replay attacks as well as man-in-the-middle attacks.
A Non-Repudiation Protocol for E-Mail and Its Formal Analysis
Peng Hongyan, Li Xiaojian, Xia Chunhe, Deng Jianfeng, and Zhou Xiaofa
2006, 43(11):  1914-1919. 
Different goals must be considered in a non-repudiation protocol for different applications. For E-mail, the goals include: non-repudiation for both parties; fairness to both the message sender and the message receiver with respect to their control over the completion of a transaction; the degree of trust placed in a third party; keeping E-mail confidential; and decreasing the number of interactions. In this paper, a new non-repudiation protocol for E-mail is proposed to resolve the existing problems of unfairness, poorly protected confidentiality, and an excessive number of interactions. By formal analysis, the new protocol is proven to provide non-repudiation for sender and receiver, fairness, and validity of evidence.
DWT Domain Blind Watermark Detection Based on Weak Signal Detection Theory
Sun Zhongwei, Feng Dengguo, and Wu Chuankun
2006, 43(11):  1920-1926. 
The performance of a watermarking scheme relies heavily on the design of the detector. However, most watermark detection algorithms in the literature neither rest on strong theoretical grounds nor are optimal. Proposed in this paper is a new discrete wavelet transform (DWT) domain watermark detection scheme, grounded in the theory of weak signal detection in non-Gaussian noise. Special attention is paid to the case where the embedding strength parameter of the watermark signal is not known at the detection stage. First, the generalized Gaussian distribution (GGD) is chosen to statistically model the wavelet coefficients of the detail sub-bands. Then, the model of deterministic signal detection with unknown parameters is utilized to formulate watermark detection, and an asymptotically optimal detector is constructed. Performance analysis of the new detector shows that it achieves the constant false alarm rate property. The theoretical analysis is validated through experimental results, which also confirm the superiority of the novel detector over conventional detection methods.
Congestion-Based RoQ DDoS Attacking and Defense Scheme in Mobile Ad Hoc Networks
Ren Wei, Liu Tenghong, and Jin Hai
2006, 43(11):  1927-1932. 
Congestion-targeted RoQ (reduction of quality of service) DDoS (distributed denial of service) attacks are discussed in detail for the first time. The attacking principle is derived from an analysis of network capacity. Four categories of attacking patterns, namely pulsing, round-robin, self-whisper, and flooding attacks, are described. Defense schemes are proposed, which include the detection of three signals, RTS/CTS packets, signal interference frequency, and retransmission times, together with a response scheme based on ECN (explicit congestion notification) marking. Extensive NS2 simulation results show that the pulsing attack mode leads to severe jitter in goodput and delay. The increase in delay and decrease in goodput become more obvious as attacking flows are added: delay grows by up to 110 times and goodput drops to 77.42% when five attacking flows with the same rate occur. A complicated topology is more vulnerable, and the distribution of attacking nodes generates more obvious impacts. The number of dropped packets also grows with the addition of attacking flows, because of IFQ (interface queue) overflow and routing overhead.
A Geometric Method of Camera Calibration Based on Single Perspective Image
Yuan Guodong, Qin Kaihuai, and Hu Wei
2006, 43(11):  1933-1938. 
A geometric method for computing a camera's interior and exterior parameters from a single perspective image is presented. According to the geometric relations of vanishing points, three vanishing points can be calculated from the single perspective image. Then the camera's relative position and orientation can be obtained using the perspective projection theory of vanishing points. Meanwhile, the camera's real position and the rectangle's vertices in 3D space can be recovered if the edge lengths of the rectangle are known. Finally, the effective focal length can be derived from the perspective projection. The method is simple and easy to implement. Extensive simulations as well as real experiments show that it offers high precision and strong robustness.
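A vanishing point is the intersection of the images of parallel scene edges, which is conveniently computed with homogeneous coordinates (an illustrative fragment of the geometry involved, not the authors' full calibration procedure):

```python
def cross(a, b):
    """Cross product of two homogeneous 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def line_through(p, q):
    """Homogeneous line through two image points (x, y) -> (x, y, 1)."""
    return cross((p[0], p[1], 1.0), (q[0], q[1], 1.0))

def vanishing_point(l1, l2):
    """Intersection of two homogeneous lines: the images of two parallel
    scene edges meet at their common vanishing point."""
    x, y, w = cross(l1, l2)
    return (x / w, y / w)
```

With three such vanishing points of mutually orthogonal directions, the rotation and, given known edge lengths, the translation and focal length can be recovered as the abstract describes.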
A Restoration Algorithm for Images Contaminated by Impulse Noise
Liu Peng, Zhang Yan, and Mao Zhigang
2006, 43(11):  1939-1946. 
In this paper, an image restoration algorithm based on a self-similarity gray value adjustment method is proposed for images contaminated by impulse noise. In a preliminary phase, a selective median filter is used to detect and suppress impulse noise. Then a gray value adjustment method based on self-similarity is employed to eliminate the pixel correlation introduced by the median filter. The algorithm uses an adaptive regularization method to restore the images: the regularization parameters are adaptively obtained from both the statistical parameters of the local image region and the competition factors produced during the gray value adjustment procedure. Experimental results show that the restored images obtained by the proposed algorithm have better objective quality and subjective visual quality than those produced by some conventional methods for images contaminated by impulse noise.
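The selective median filter idea, replacing a pixel only when it deviates strongly from its neighbourhood median, can be sketched as follows (an illustrative sketch on interior pixels of a grayscale image stored as lists of lists; the threshold value is an assumption):

```python
def detect_and_filter(img, thresh=40):
    """Selective median filtering: a pixel is replaced by the median of its
    3x3 neighbourhood only if it deviates from that median by more than
    thresh, i.e. only if it looks impulse-corrupted."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            neigh = sorted(img[i + di][j + dj]
                           for di in (-1, 0, 1) for dj in (-1, 0, 1))
            med = neigh[4]                       # median of 9 values
            if abs(img[i][j] - med) > thresh:    # likely an impulse
                out[i][j] = med
    return out
```

Uncorrupted pixels pass through untouched, which is why a later step is still needed to deal with the correlation among the pixels the filter did rewrite.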
Research on Medical Image Clustering Based on Approximate Density Function
Song Yuqing, Xie Conghua, Zhu Yuquan, Li Cunhua, Chen Jianmei, and Wang Lijun
2006, 43(11):  1947-1952. 
It is difficult to represent and cluster medical image data with a mathematical model. To address this problem, a medical image clustering method based on an approximate density function is designed. The method uses a kernel density estimation model to construct the approximate density function, and takes a hill-climbing strategy to extract clustering patterns. Experimental results show that it achieves good results on real human abdomen medical images.
Efficient Sports Video Retrieval Based on Index Structure
Zhang Jing, Lu Hong, and Xue Xiangyang
2006, 43(11):  1953-1958. 
Retrieving similar video clips from a large video database requires high query efficiency, precision, and recall, and is a challenging problem in the field of multimedia information retrieval. In this paper, the video stream is segmented by the visual similarity between neighboring frames, and a high-dimensional index structure, the vector-approximation file (VA-file), is adopted to organize the segments. Furthermore, a new similarity measure and query algorithm based on a restricted sliding window is proposed to improve query accuracy. The proposed segmentation algorithm can efficiently represent the details of motion, and the new similarity measure fully takes into account the temporal order among video segments. These properties are well suited to the retrieval of sports videos. Experimental results demonstrate that the proposed video retrieval method is efficient and effective.
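A restricted sliding window keeps matching temporally local: segment i of the query may only be compared with segments of the database clip near position i. A minimal sketch (illustrative only, with one scalar feature per segment and an assumed similarity kernel, not the paper's measure):

```python
def clip_similarity(q, d, radius=1):
    """Restricted-sliding-window similarity between two clips, each given as
    a list of per-segment feature values. Segment i of q may only match
    segments of d within +-radius positions, preserving temporal order."""
    score = 0.0
    for i, qf in enumerate(q):
        lo, hi = max(0, i - radius), min(len(d), i + radius + 1)
        if lo < hi:
            # similarity kernel: 1 for identical features, decaying with distance
            score += max(1.0 / (1.0 + abs(qf - df)) for df in d[lo:hi])
    return score / len(q)
```

Identical clips score 1.0, while clips whose similar segments occur in a very different order are penalized, which is the point of restricting the window.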
An Audio Feature Extraction Method Taking Class Information into Account
Chen Gang and Chen Xinmeng
2006, 43(11):  1959-1964. 
An audio feature extraction method that improves the classification rate by utilizing class information is proposed in this paper. In contrast to the traditional feature extraction method based on independent component analysis, the mutual information among the dimensions of the feature vectors is computed on each set of data belonging to a different class, rather than on the whole training set. The average is then adopted as the contrast function when generating the basis functions of the training space. Experiments show that a higher classification rate can be achieved on the new feature vectors extracted by this method, and that the new basis functions produce smaller mutual information among the dimensions of feature vectors belonging to the same class.
Research on Chinese Linux Input Method Engine Standard
Xie Qian, Jiang Li, Wu Jian, and Sun Yufang
2006, 43(11):  1965-1971. 
Several input method frameworks, along with corresponding implementations, coexist on Linux. This situation indicates the inadequacy of existing standardization efforts focusing on the input method protocol. However, the IME (input method engine) interface standard developed by the China Linux Standards Group introduces a brand-new way to standardize input methods, and makes it possible to develop an adaptive IME working on different frameworks. Based on a survey and synthesis of the existing input method frameworks on Linux, the rationale and benefits of IME interface standardization are analyzed, and the features and design principles of the standard are expounded. Entities related to input methods are partitioned into four portions; the runtime mechanisms of each portion, as well as the interactions between portions, are analyzed in detail by means of event flow charts. The feasibility of the standard is demonstrated by sample IME implementations; three sample implementations that cover different aspects of the IME interface standard are briefly summarized. This design and development practice can serve as a reference for developing compliant IMEs. Finally, prospects for promising applications and future working directions are proposed.
A Highly Efficient Architecture for Motion Estimation Based on the AVC/AVS Coding Standards
Deng Lei, Gao Wen, Hu Mingzeng, and Ji Zhenzhou
2006, 43(11):  1972-1979. 
In the new video compression standards, AVC and AVS, motion estimation adopts many new features, such as variable block size searching, multiple reference frames, and motion vector prediction, to achieve superior coding performance. However, these new features greatly increase the computational complexity of motion estimation. To satisfy the high computation requirement, a highly efficient architecture is proposed for variable block size motion estimation (VBSME). It has two clocks: a slow clock used by the periphery of the architecture and a fast clock used by its kernel. The kernel achieves a very high frequency by adopting a fine-grained pipeline implementation, and the pipeline also achieves very high efficiency. Experimental results show that this architecture has the computational capability to code a 720×576 picture at 71fps with a search range of 65×65.
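A key trick that makes VBSME hardware tractable is SAD reuse: the cost of a large partition is the sum of the costs of the small partitions it contains, so the 4×4 SADs are computed once and merged. A small sketch of the principle (illustrative, shown for an 8×8 block; not the paper's pipeline design):

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized pixel blocks."""
    return sum(abs(a - b)
               for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def merge_sads(sad4x4):
    """SAD reuse for VBSME: given a 2x2 grid of 4x4 SADs covering an 8x8
    block, derive the 8x8, 8x4, and 4x8 partition SADs by summation."""
    s8x8 = sum(sum(row) for row in sad4x4)
    s8x4 = [sad4x4[0][0] + sad4x4[0][1], sad4x4[1][0] + sad4x4[1][1]]
    s4x8 = [sad4x4[0][0] + sad4x4[1][0], sad4x4[0][1] + sad4x4[1][1]]
    return s8x8, s8x4, s4x8
```

In hardware, these additions are a shallow adder tree fed by the 4×4 SAD units, which is what allows one search pass to evaluate all partition sizes.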
A QoS Capable Fetch Policy for SMT Processors
He Liqiang and Liu Zhiyong
2006, 43(11):  1980-1984. 
Simultaneous multithreaded (SMT) processors improve instruction throughput by fetching and running instructions from several threads simultaneously in a single cycle. In this paper, a QoS-capable fetch policy for SMT processors is proposed, and the related issues of QoS management are discussed. The key idea of the policy is to use a priority and a flow speed to control the fetch process of every running thread so as to fulfill its QoS requirement. Compared with a pure-priority-based fetch policy, this scheme not only has QoS capability but also utilizes the fetch bandwidth more efficiently, thus providing better performance and higher instruction throughput for SMT processors. The implementation of the fetch policy is simple. Execution-driven simulation results show that, besides its QoS capability, this policy improves the overall performance of the pure-priority-based fetch policy, ICOUNT, by 15% on average.
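How a priority plus a per-thread flow speed could steer fetch bandwidth can be sketched as a per-cycle selection routine (an illustrative model, not the paper's hardware: thread records and the ICOUNT-style tie-break on in-flight instruction count are assumptions):

```python
def pick_fetch_threads(threads, bandwidth):
    """One fetch cycle: order runnable threads by priority (high first),
    breaking ties ICOUNT-style by fewest in-flight instructions, then grant
    fetch slots while capping each thread at its flow speed."""
    order = sorted(threads, key=lambda t: (-t["priority"], t["in_flight"]))
    grants = {}
    for t in order:
        if bandwidth == 0:
            break
        share = min(t["flow_speed"], bandwidth)
        grants[t["name"]] = share
        bandwidth -= share
    return grants
```

The flow-speed cap is what keeps a high-priority thread from starving the others, so leftover bandwidth still reaches low-priority threads every cycle.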
Research on Synchronization Technology of Fault-Tolerant Computer System Based on Operating System Calls
Xiong Tinggang, Ma Zhong, and Yuan Youguang
2006, 43(11):  1985-1992. 
Synchronization is the key process in implementing fault-tolerant computer systems based on voting. Traditional fault-tolerant computer systems that adopt hardware-based or application-program-based synchronization technologies suffer from problems such as being hard to design and produce, poor compatibility, and difficulty of use. Proposed in this paper is a synchronization mechanism based on system calls in the operating system. The algorithm of the synchronization mechanism is described, and its implementation in the Linux operating system is introduced. The mechanism is completely transparent to application programs, and synchronization points need not be set manually by the programmer. The algorithm is implemented by a combination of hardware and software. Experimental results show that the synchronization mechanism is feasible and largely achieves the goals of a system that is easy to design and convenient to use.
Post-Refinement of Shot Boundary Detection Based on Manifold Feature
Wang Bei, Yang Linjun, Lu Hong, and Xue Xiangyang
2006, 43(11):  1993-1998. 
The shot is the basic unit of video analysis and retrieval. In order to perform shot boundary detection effectively and to classify different types of shot boundaries, a new feature, the shot boundary manifold, is proposed. The shot boundary manifold can be regarded as a low-dimensional local structure embedded in the high-dimensional video data. Using this feature, a new post-refinement algorithm is proposed to eliminate false positives and classify genuine shot boundaries into four types: cut, dissolve, wipe, and fade. Experimental results demonstrate the effectiveness of the new shot boundary descriptor, the good performance of post-refinement on shot boundary detection, and the good results of shot boundary classification.
Semantic Analyses of Rough Truth for Axioms in Modal Logic
Yan Lin and Zhang Congpin
2006, 43(11):  1999-2004. 
Rough truth, which lies between truth and falsity, is one of the five logic values in Pawlak rough logic. By considering the relations between any two approximation spaces on the domain U^n, a lattice, which is a kind of algebraic structure, is constructed, and from this lattice a special Kripke model is developed. Within this model, semantic analyses are carried out for the axioms of the formal reasoning system in modal logic. Instead of discussing only the two values of truth and falsity, the discussion mainly focuses on the analysis of rough truth. The conclusions show that the axioms of the formal reasoning system in modal logic are almost all rough-truth valid within this special Kripke model. Thus soundness is obtained when some of the axioms are used in formal reasoning.
Research on Clustering and Evolution Analysis of High Dimensional Data Stream
Zhou Xiaoyun, Sun Zhihui, Zhang Baili, and Yang Yidong
2006, 43(11):  2005-2011. 
Clustering analysis over data streams has become a hot research issue. In this paper, CAStream, a novel subspace-based algorithm for clustering and evolution analysis over high-dimensional data streams, is presented. CAStream partitions the data space into grids, obtains the grid summary statistics using an approximate method, stores snapshots of potentially dense grids in an improved pyramid time frame, and finally finds the clusters and analyzes cluster evolution with a depth-first search algorithm. CAStream can deal with high-dimensional data streams and discover clusters of arbitrary shape. Experimental results on real and synthetic datasets demonstrate the effectiveness of the approach.
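The grid-plus-DFS core of such an algorithm can be sketched in 2D (an illustrative sketch with assumed cell size and density threshold; the real algorithm works over subspaces of a high-dimensional stream and over time-stamped snapshots):

```python
from collections import defaultdict

def grid_clusters(points, cell=1.0, density=2):
    """Map points to grid cells, keep cells holding at least `density`
    points, and merge face-adjacent dense cells into clusters with a
    depth-first search."""
    cells = defaultdict(int)
    for x, y in points:
        cells[(int(x // cell), int(y // cell))] += 1
    dense = {c for c, n in cells.items() if n >= density}
    clusters, seen = [], set()
    for c in dense:
        if c in seen:
            continue
        comp, stack = [], [c]
        seen.add(c)
        while stack:                       # DFS over adjacent dense cells
            cx, cy = stack.pop()
            comp.append((cx, cy))
            for nb in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                if nb in dense and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        clusters.append(comp)
    return clusters
```

Because clusters are unions of adjacent dense cells rather than distance-based spheres, arbitrarily shaped clusters fall out naturally, and evolution analysis amounts to comparing the dense-cell sets of successive snapshots.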
Logical Foundation for Object-Oriented XML Database
Zhang Xiaolin, Wang Guoren, and Liu Huilin
2006, 43(11):  2012-2019. 
XML is emerging as the dominant standard for information exchange and data representation over the Web. The modeling ability of object-oriented methods is strong, with features such as inheritance, nonmonotonic multiple inheritance, polymorphism, and complex data structures. Extending XML with some object-oriented features can improve the modeling ability of the XML data model. DTD is extended with element hierarchy, multiple inheritance, overriding, blocking, polymorphism, and conflict handling, and XML-RL is extended with polymorphic elements as well as inclusive elements. XML-RL is a rule-based XML query language based on a high-level data model. The syntax and semantics of the object-oriented XML database are described systematically.
A Value Range Analysis Based on Abstract Interpretation and the Generalized Monotone Data Flow Framework
Ji Mengluo, Wang Huaimin, Li Mengjun, Dong Wei, and Qi Zhichang
2006, 43(11):  2020-2026. 
Safe and accurate value range analysis is crucial for compiler optimization. Based on abstract interpretation and the generalized monotone data flow framework, a complete framework for value range analysis is proposed in this paper. Different from other value range analysis methods, this framework includes complete definitions, analyses, and correctness proofs. Compared with the general theory of abstract interpretation, the method focuses specifically on value range analysis, so the analysis and its correctness proof are straightforward.
Quality-of-Service-Aware Dynamic Priority Scheduling Scheme for Mixed-Class Multimedia Workloads in Storage Systems
Li Zhong, Wang Gang, and Liu Jing
2006, 43(11):  2027-2032. 
QADPS (QoS-aware dynamic priority scheduling scheme), a new scheduling scheme for meeting the different QoS requirements of multimedia applications, is introduced in this paper. When retrieving data from storage systems, multimedia applications need QoS guarantees: the fraction of requests whose response time exceeds a specific delay limit must remain below a certain proportion. Different multimedia applications may have different QoS requirements in a mixed multimedia workload. Based on the history of service received by a multimedia application, its QoS failed-distance can be calculated. The basic idea of QADPS is to assign higher priority to the applications with shorter failed-distance, so that the requests of applications with stricter QoS requirements get more chances of service from the storage system. The effectiveness of this approach is evaluated through simulation; compared with the conventional scheme, QADPS substantially increases the number of concurrent multimedia applications.
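One plausible reading of the failed-distance idea is "how many more deadline misses an application can absorb before its miss ratio exceeds its quota" (an illustrative sketch under that assumption, with made-up application records; not the paper's exact definition):

```python
def failed_distance(served, missed, miss_quota):
    """Extra deadline misses an application can absorb before its miss
    ratio (missed / total) would exceed miss_quota."""
    total = served + missed
    m = 0
    # grow m while one more miss would still keep the ratio within quota
    while (missed + m + 1) / (total + m + 1) <= miss_quota:
        m += 1
    return m

def next_request(apps):
    """QADPS core idea: serve the application closest to violating its QoS,
    i.e. the one with the shortest failed-distance."""
    return min(apps, key=lambda a: failed_distance(a["served"], a["missed"], a["quota"]))
```

An application that has already burned most of its miss budget thus jumps ahead of one with plenty of slack, which is how stricter QoS classes end up with more service chances.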