ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 May 2009, Volume 46 Issue 5
Paper
A Linear Programming Algorithm for Wireless Sensor Networks Localization
Wang Shanshan, Yin Jianping, Zhang Guomin, and Cai Zhiping
2009, 46(5):  705-712. 
Wireless sensor networks are widely applied in many fields, and sensor node localization is the basis and prerequisite for most applications. A linear programming algorithm is presented for wireless sensor network localization. The received signal strength indication (RSSI) and an empirical radio propagation model are used to deduce the distance relationships between communicable node pairs in a wireless sensor network, and the communication range is used to estimate the distances between communicable node pairs. These estimated distances are modeled as a set of square constraints by approximating each circular communication region with a square, and a linear programming problem over these constraints is used in place of the programming problem with quadratic constraints. A global solution of the linear programming problem yields estimates of the unknown node positions, from which the node coordinates are obtained. Simulation results show that good localization accuracy can be achieved when anchors are distributed near the fringe of the network. Analyses are made to validate the influence of anchor distribution, the number of anchors, and connectivity on the localization error. Furthermore, compared with convex position estimation for sensor node localization, the linear programming localization algorithm greatly reduces the number of programming problems to be solved and achieves smaller localization error under the same simulation conditions.
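As an illustration of the square-constraint relaxation described in this abstract, the following minimal sketch poses localization as a feasibility LP with scipy.optimize.linprog. The node layout, communication range R, and the use of a pure feasibility objective are assumptions for illustration; the paper's exact LP formulation is not given in the abstract.

```python
# Square-constraint LP sketch: each communicable pair must lie within an
# axis-aligned square of side 2R (the square approximation of the radio disk).
import numpy as np
from scipy.optimize import linprog

R = 1.0                                                   # assumed range
anchors = {0: (0.0, 0.0), 1: (2.0, 0.0), 2: (0.0, 2.0)}   # known positions
unknown = [3, 4]                                          # nodes to localize
links = [(0, 3), (1, 3), (2, 4), (3, 4)]                  # communicable pairs

idx = {n: 2 * i for i, n in enumerate(unknown)}           # (x, y) per unknown
n_var = 2 * len(unknown)
A_ub, b_ub = [], []

def add_pair(i, j):
    """|p_i - p_j| <= R on each axis, written as two linear inequalities."""
    for axis in (0, 1):
        for sign in (+1, -1):
            row = np.zeros(n_var)
            rhs = R
            if i in idx:
                row[idx[i] + axis] += sign
            else:
                rhs -= sign * anchors[i][axis]
            if j in idx:
                row[idx[j] + axis] -= sign
            else:
                rhs += sign * anchors[j][axis]
            A_ub.append(row)
            b_ub.append(rhs)

for i, j in links:
    add_pair(i, j)

res = linprog(c=np.zeros(n_var), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(-10, 10)] * n_var, method="highs")
print("feasible positions:", res.x.reshape(-1, 2))
```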
Real-Time and Reliable Greedy Geographical Routing for Mobile Wireless Sensor Networks
Zhang Hengyang, Fan Weihong, Wang Ling, and Zhou Dongxiang
2009, 46(5):  713-722. 
With the growing popularity of positioning devices and other localization schemes, greedy geographical routing protocols have received extensive attention due to their substantial advantages over topology-based routing protocols. However, in mobile sensor networks many data packets are dropped because of temporary communication blindness resulting from fixed-period beacon exchange in greedy geographical routing protocols. In addition, face routing based on the right-hand rule does not recognize the characteristics of a void and sometimes forwards data along a long route path, thus prolonging the delay of data packets. In this paper, a new real-time and reliable greedy geographical routing protocol is proposed. It adopts a new adaptive beacon exchange algorithm and a buffer-zone-based forwarding strategy to eliminate temporary communication blindness, and adopts a route-sign-based adaptive void-handling algorithm to mitigate the inefficiency of existing greedy geographical routing. Simulation results show that the protocol achieves a high packet delivery ratio and lower delivery delay by largely eliminating temporary communication blindness and bypassing routing voids efficiently, while introducing no additional overhead. The protocol is therefore scalable and applicable to large-scale mobile wireless sensor networks that require high real-time and reliability performance.
BPEC: An Energy-Aware Distributed Clustering Algorithm in WSNs
Zhou Xinlian, Wu Min, and Xu Jianbo
2009, 46(5):  723-730. 
The large-scale deployment of wireless sensor networks and the need for data aggregation necessitate efficient organization of the network topology for the purpose of balancing the load and prolonging the network lifetime. Clustering has proved to be an effective approach for organizing the network into a connected hierarchy. In this paper, a distributed energy-aware clustering algorithm, BPEC, is proposed. Cluster heads are elected by two probabilities: the primary probability is based on the ratio between the average residual energy of a node's neighbors and the node's own residual energy, and the subsidiary probability is the node's degree. With the BPEC algorithm, the broadcasting complexity over the entire network is O(n) and the computing complexity is O(1). It is proved that the cluster-head set C produced by BPEC is a dominating set of the wireless sensor network G(V,E), and clear upper and lower bounds on the number of cluster heads in C are derived theoretically. The cluster-head set generated by BPEC is also proved to be a maximal independent set, which covers all network nodes. Simulation experiments show that when the network has a higher communication coverage density, the analytical and experimental results are very close, which indicates that the cluster number produced by BPEC matches the theoretical value.
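The abstract names the two ingredients of the election (the neighborhood energy ratio as the primary probability, node degree as the subsidiary criterion) but not their exact functional forms, so the sketch below is only one plausible reading; the probability formula, the base probability p0, and the tie-breaking rule are assumptions.

```python
# Illustrative BPEC-style cluster-head election step (formulas assumed).
import random

def election_probability(own_energy, neighbor_energies, p0=0.3):
    """Primary probability: more residual energy relative to the neighborhood
    average makes a node more likely to volunteer as cluster head."""
    avg = sum(neighbor_energies) / len(neighbor_energies)
    return min(1.0, p0 * own_energy / avg)

def tie_break_key(node_degree):
    """Subsidiary criterion: prefer candidates with larger degree."""
    return node_degree

# One election round on a toy neighborhood: node -> (residual energy, neighbors)
nodes = {"a": (0.9, ["b", "c"]), "b": (0.4, ["a", "c"]), "c": (0.7, ["a", "b"])}
energy = {n: e for n, (e, _) in nodes.items()}
candidates = []
for n, (e, nbrs) in nodes.items():
    p = election_probability(e, [energy[m] for m in nbrs])
    if random.random() < p:
        candidates.append(n)
candidates.sort(key=lambda n: tie_break_key(len(nodes[n][1])), reverse=True)
print("candidate cluster heads (degree-ordered):", candidates)
```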
CMV-HOT: An Evolution Model of Inter-Domain Routing System Based on the Complex System Theory
Zhao Jinjing, Huang Minhuan, and Zhu Peidong
2009, 46(5):  731-737. 
Understanding the evolution process of the inter-domain routing system precisely is essential for resolving its fundamental problems efficiently and thoroughly. With the rapid growth of the Internet and the diversification of its applications, the inter-domain routing system is gradually becoming an open, complex giant system. The authors study the different stages that autonomous systems may go through during their lifecycles and present a dynamic evolution model, named CMV-HOT, for inter-domain routing systems based on complex system theory. The CMV-HOT model classifies autonomous systems into three types, Hub AS, Transit AS and Stub AS, and simulates the evolution process of inter-domain routing systems from the perspective of complex system theory by analyzing their internal rules and external behaviors. Comparison with data collected from CAIDA BGP tables shows that essential parameters, such as node degree, average path length and the average clustering coefficient of autonomous systems, are highly consistent with the real environment. It can therefore be concluded that the CMV-HOT model satisfies both the power-law property and the small-world property, and has high accuracy and practical value for network research.
Survey of TCP Improvement over Multi-Hop Wireless and Wired Hybrid Networks
Yan Guofeng and Wang Jianxin
2009, 46(5):  738-746. 
Standard TCP (transmission control protocol) has limitations when used in wired and multi-hop wireless hybrid network environments. The transmission media carrying data in such networks present a wide range of characteristics, some of which may degrade TCP performance. Much work has been conducted on TCP performance in different network environments, and many efforts have been made to improve TCP over wireless and wired hybrid networks, most of which adapt TCP to these networks by modifying the standard protocol. Two main approaches have been proposed: one focuses on modifying a single layer of the OSI model, namely layered design; the other improves TCP performance by modifying multiple layers, namely cross-layer design. In this paper, the different characteristics of wireless and wired networks are analyzed, the key factors that influence TCP performance in wireless networks are discussed, and the difficult issues in improving TCP over wired and wireless hybrid networks are pointed out. Research work on TCP performance in wired and wireless network environments is analyzed and summarized. Finally, possible future research directions for TCP over multi-hop wireless and wired hybrid networks are proposed.
Code Layout for Phase Prefetch on Instruction Cache
Hu Xiao and Chen Shuming
2009, 46(5):  747-755. 
Code layout and instruction prefetch are efficient methods for reducing instruction fetch delays in processors with an instruction cache. Code layout adjusts the relative spatial positions of executed code, while instruction prefetch exploits the relative timing relations of executed code. So far there has been little research on combining them to achieve better results. On-chip trace is a new debugging technique that non-intrusively records the whole program path and time marks with special hardware. It connects the spatial and temporal relations of code execution, and therefore makes it possible to combine code layout with instruction prefetch. On the YHFT-DSP platform, which has an instruction cache and an on-chip trace system, instructions are prefetched according to program phase behaviors extracted from the program execution path, and function codes are reordered by prefetch layout to obtain sufficient prefetch intervals. Prefetch operations are executed by idle function units of the VLIW DSP or by NOP instructions to reduce overhead. Four benchmarks, JPEG encoder, floating-point FFT, LPC and MPEG-4 encoder, are tested to evaluate the method. Test results show that this method can enhance prefetching performance and reduce instruction cache misses by exploiting the phase stability of the program path.
An Analytical Model for Large-Scale Storage System with Replicated Data
Mu Fei, Xue Wei, Shu Jiwu, and Zheng Weimin
2009, 46(5):  756-761. 
Storage systems are becoming larger and larger, and the number of storage devices is increasing rapidly, so storage device failures occur quite frequently in large-scale storage systems. Data replication is therefore widely adopted to enhance storage system reliability. When designing a large-scale storage system, many factors affect its reliability, such as failure detection latency, storage node capacity, data object size and the number of replicas. On the other hand, system reliability cannot be measured directly by experiment, so a theoretical model is needed to evaluate it. In this paper, an analytical framework is presented to evaluate the reliability of large-scale storage systems that use replication to protect data. Based on a Markov model, this analytical model provides quantitative answers that measure the impact of a series of storage system design factors on reliability, such as the number of replicas, the capacity of the storage system, the capacity of storage nodes, the size of data objects, the repair bandwidth and the mean failure detection latency. Hence, many storage system design tradeoffs can be reasoned about within this framework.
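To make the kind of Markov analysis described here concrete, the sketch below computes the mean time to data loss (MTTDL) of an object kept as r replicas from a simple birth-death chain. The per-replica failure rate, repair rate, and the birth-death structure are illustrative assumptions, not the paper's exact model.

```python
# MTTDL of r replicas: transient states r..1 surviving replicas, state 0 = loss.
import numpy as np

def mttdl(r, lam, mu):
    """Solve Q_T t = -1 for expected absorption times of the transient states."""
    states = list(range(r, 0, -1))              # r, r-1, ..., 1
    n = len(states)
    Q = np.zeros((n, n))
    for a, k in enumerate(states):
        Q[a, a] -= k * lam                      # any of the k replicas may fail
        if k - 1 >= 1:
            Q[a, states.index(k - 1)] += k * lam
        if k < r:                               # a repair restores one replica
            Q[a, a] -= mu
            Q[a, states.index(k + 1)] += mu
    t = np.linalg.solve(Q, -np.ones(n))
    return t[0]                                 # starting from r replicas

# Example: 3 replicas, 1-year MTTF per replica, 1-day repairs (rates per hour).
print(f"MTTDL ~ {mttdl(3, 1/8760, 1/24):.3e} hours")
```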
An Improved Block-Level Continuous Data Protection Mechanism
Li Xu, Xie Changsheng, Yang Jing, Cao Qiang, and Wei Qinqi
2009, 46(5):  762-769. 
With the rapid development of storage technology, continuous data protection (CDP) has become an important data protection and recovery method for modern data storage systems. Considering the merits and shortcomings of existing CDP mechanisms, an improved mechanism based on TRAP-4 (timely recovery to any point-in-time) is proposed. It retains the basic data-logging method of TRAP-4 and inserts a small number of snapshots into the recovery chain at a certain interval d. This mechanism not only avoids the latent recovery-chain crash problem, but also greatly shortens the data recovery time. A quantitative mathematical model is used to analyze its performance and to compute an optimal value of d for ST-CDP. A prototype of the block-level ST-CDP mechanism has been implemented in a Linux block-level device (the MD RAID-5 device of the Linux kernel). Comparative experiments among three different CDP mechanisms are presented. The performance results show that ST-CDP not only has lower storage space overhead and less impact on the storage system than traditional snapshot mechanisms, but also has higher recovery efficiency and reliability than TRAP-4. ST-CDP is thus an efficient continuous data protection mechanism with very low recovery cost.
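The toy model below illustrates the idea stated in the abstract: keep a TRAP-4-style parity log (XOR of consecutive block versions) but insert a full snapshot every d writes, so that recovering any version needs at most d XORs and a damaged parity cannot invalidate the whole chain. Block contents and the value of d are made up for illustration.

```python
# Toy ST-CDP-style log: parity chain plus periodic full snapshots.
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class STCDPLog:
    def __init__(self, initial: bytes, d: int = 4):
        self.d = d
        self.parities = []                 # parities[i] = v[i] XOR v[i+1]
        self.snapshots = {0: initial}      # full copies every d versions
        self._last = initial

    def write(self, new_version: bytes):
        self.parities.append(xor(self._last, new_version))
        self._last = new_version
        version = len(self.parities)
        if version % self.d == 0:
            self.snapshots[version] = new_version

    def recover(self, version: int) -> bytes:
        base = max(v for v in self.snapshots if v <= version)
        block = self.snapshots[base]
        for i in range(base, version):     # at most d XOR operations
            block = xor(block, self.parities[i])
        return block

log = STCDPLog(b"\x00" * 8, d=4)
versions = [bytes([i]) * 8 for i in range(1, 11)]
for v in versions:
    log.write(v)
assert log.recover(7) == versions[6]       # recover version 7 from snapshot 4
print("recovered:", log.recover(7))
```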
RQIC: An Efficient Similarity Searching Algorithm on Time Series
Jiang Tao, Feng Yucai, Zhu Hong, and Li Guohui
2009, 46(5):  770-778. 
Indexing large time series databases is crucial for efficiently answering time series queries. An index scheme, RQI (range query based on index), is introduced, which includes three filtering methods: first-k filtering, indexing of lower and upper bounds, and triangle-inequality pruning. The basic idea is to compute the wavelet coefficients of each time series with the Haar wavelet transform; the first k coefficients form a minimal bounding rectangle (MBR), so that point filtering can be used. In addition, the lower-bounding and upper-bounding features of each time series are calculated in advance and stored in the index structure. Finally, triangle-inequality pruning is applied by pre-computing distances between time series. Moreover, a new lower-bounding distance function SLBS (symmetrical lower bounding based on segments) and a novel clustering algorithm CSA (clustering based on segment approximation) are proposed to further improve the search efficiency of point filtering by maintaining a good clustering property of the index structure, yielding an improved algorithm RQIC (range query based on index and clustering). Extensive comparative experiments on both synthetic and real data sets show that RQIC provides strong pruning ability and high search efficiency.
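The first-k filtering step can be illustrated directly: an orthonormal Haar transform preserves Euclidean distance, so the distance computed on the first k coefficients lower-bounds the true distance and can safely prune candidates. The sketch below assumes series whose length is a power of two and omits the MBR/index structure itself.

```python
# First-k lower bound from an orthonormal Haar transform.
import numpy as np

def haar(x):
    """Orthonormal Haar wavelet transform (length must be a power of two)."""
    x = np.asarray(x, dtype=float)
    out = []
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        dif = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        out.insert(0, dif)        # finer details placed after coarser ones
        x = avg
    out.insert(0, x)
    return np.concatenate(out)    # coefficient 0 is the coarsest (scaled mean)

def first_k_lower_bound(a, b, k):
    return np.linalg.norm(haar(a)[:k] - haar(b)[:k])

rng = np.random.default_rng(0)
q, s = rng.standard_normal(16), rng.standard_normal(16)
lb, true = first_k_lower_bound(q, s, k=4), np.linalg.norm(q - s)
assert lb <= true + 1e-9          # lower-bounding property (Parseval)
print(f"lower bound {lb:.3f} <= true distance {true:.3f}")
```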
CBC-DS: A Classification Algorithm Based on Closed Frequent Patterns for Mining Data Streams
Ao Fujiang, Wang Tao, Liu Baohong, and Huang Kedi
2009, 46(5):  779-786. 
Classification algorithms based on association rules generally generate classification association rules from frequent patterns. As mining frequent patterns often suffers from combinatorial explosion, the efficiency of such algorithms is low. Moreover, the emergence of data streams has posed new challenges for classification algorithms. In contrast to frequent patterns, closed frequent patterns are fewer in number, so algorithms that mine closed frequent patterns are more efficient. A novel and efficient classification algorithm based on closed frequent patterns, CBC-DS, is proposed for classifying data streams. The contributions are as follows: (1) a single-pass closed frequent itemset mining process based on a reverse lexicographic-order FP-tree is introduced for mining classification association rules; it uses a mixed item-ordering search policy to satisfy the single-pass requirement of data streams and uses a bitmap technique to improve efficiency; (2) the concept of self-support is proposed for filtering rules to improve precision. Experimental results show that the bitmap technique improves the efficiency of the algorithm by at least a factor of two, and the average classification precision is improved by about 0.5% by using self-support. Overall, the average precision of CBC-DS is about 1% higher than that of CMAR, and CBC-DS is much faster than CMAR.
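The bitmap technique mentioned in contribution (1) can be shown in a few lines: each item keeps a bitmap of the transactions containing it, and the support of an itemset is the popcount of the AND of its bitmaps. The tiny transaction set is illustrative only; the closed-pattern mining and rule generation of CBC-DS are not reproduced.

```python
# Bitmap-based support counting (one Python int per item as its bitmap).
transactions = [
    {"a", "b", "c"},
    {"a", "c"},
    {"a", "b"},
    {"b", "c"},
]

bitmaps = {}
for t, items in enumerate(transactions):
    for it in items:
        bitmaps[it] = bitmaps.get(it, 0) | (1 << t)   # bit t = transaction t

def support(itemset):
    bm = ~0
    for it in itemset:
        bm &= bitmaps.get(it, 0)                      # AND the item bitmaps
    return bin(bm & ((1 << len(transactions)) - 1)).count("1")

print(support({"a", "c"}))   # -> 2 (transactions 0 and 1)
print(support({"a", "b"}))   # -> 2 (transactions 0 and 2)
```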
Revising the Parameters of Bayesian Network with Multi-Father Nodes from Small Data Set
Wang Shuangcheng, Leng Cuiping, and Cao Feng
2009, 46(5):  787-793. 
Bayesian networks are graphical representations of dependency relationships between variables. They are intuitive representations of knowledge, akin to human reasoning paradigms, and are powerful tools for dealing with uncertainty. They have been used extensively for representing and reasoning about uncertain knowledge, and over the past decades have been successfully applied in medical diagnosis, software intelligence, financial risk analysis, DNA functional analysis, Web mining and other areas, becoming a rapidly growing field of research. Bayesian network learning, which includes structure learning and parameter learning, is the foundation of these applications. Learning the parameters of a Bayesian network with multi-father nodes from a small data set is both important and challenging: owing to insufficient information, many parameters of multi-father nodes cannot be estimated directly, and how to learn these parameters effectively is the key problem. In this paper, an effective and practical method for learning the parameters of Bayesian networks with multi-father nodes from small data sets is developed. Firstly, the small data set is extended by bootstrap sampling. Then, Gibbs sampling is combined with the maximum likelihood tree and the Bayesian network, respectively. Finally, the parameters of the multi-father nodes are learned by iteratively revising a portion of the extended data in a certain proportion. Experimental results show that this method can efficiently learn the majority of the parameters of multi-father nodes.
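The first step mentioned in the abstract, extending a small data set by bootstrap sampling, is shown below as a minimal sketch; the data records, variable names and expansion factor are assumptions, and the subsequent Gibbs-sampling revision of the multi-father-node parameters is not shown.

```python
# Bootstrap extension of a small complete-data set (draws with replacement).
import random

def bootstrap_extend(records, factor=5, seed=0):
    """Return the original records plus (factor - 1) * n resampled records."""
    rng = random.Random(seed)
    extended = list(records)
    for _ in range((factor - 1) * len(records)):
        extended.append(rng.choice(records))
    return extended

# Tiny example over three binary variables (A, B, C).
small = [(0, 1, 1), (1, 1, 0), (0, 0, 1), (1, 0, 0), (1, 1, 1)]
print(len(bootstrap_extend(small)))   # 25 records after a 5x extension
```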
Gene Selection for Cancer Classification in Microarray Data
Zhang Lijuan and Li Zhoujun
2009, 46(5):  794-802. 
Microarray data have been widely and successfully applied to cancer classification, where the purpose is to classify and predict the diagnostic category of a sample from its gene expression profile. A typical microarray dataset consists of expression levels for a large number (usually thousands or tens of thousands) of genes on a relatively small number (often fewer than one hundred) of samples. Of the tens of thousands of genes, only a small number contribute to cancer classification. Consequently, one basic and important question in cancer classification is how to identify a small subset of informative genes that contribute the most to the classification task; this procedure is usually called gene selection. Gene selection has been widely studied in statistical pattern recognition, machine learning and data mining. The authors review the field of gene selection based on their earlier work: they introduce the background and the two basic concepts of gene selection (gene relevance and relevance measures), categorize existing gene selection methods from the statistics, machine learning and data mining areas, demonstrate the performance of several representative gene selection algorithms through an empirical study on public microarray data, identify the open problems of gene selection, and point out current trends and future directions.
A Multi-Objective Evolutionary Algorithm Based on Minimum Spanning Tree
Li Miqing, Zheng Jinhua, and Luo Biao
2009, 46(5):  803-813. 
Fitness assignment and external population maintenance are two critical parts of multi-objective evolutionary algorithms (MOEAs), and they largely determine the two basic design objectives of an MOEA, convergence and distribution. Many existing MOEAs find it difficult to obtain satisfactory results. In this paper, a multi-objective evolutionary algorithm based on the minimum spanning tree (MST_MOEA) is proposed. The algorithm defines a density estimation metric called tree crowding density. On the basis of the Pareto dominance relationship, fitness is assigned using two factors: the distance between the current individual and the non-dominated solution set, and the tree crowding density of individuals of different ranks. Moreover, if the size of the non-dominated solution set exceeds the population size, the degree information of solutions combined with the tree crowding density is used to truncate the population; solutions with higher degree are eliminated first, because the degree in the minimum spanning tree reflects not only the density of a solution but also, to a certain extent, its situation in the population. Finally, the proposed algorithm is applied to nine test functions of different dimensions and compared extensively with NSGA-II and SPEA2. Experimental results demonstrate that the proposed algorithm shows good performance in both convergence and distribution.
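A tree-crowding-style estimate can be sketched by building a minimum spanning tree over the objective vectors and using, for each solution, the mean length of its incident MST edges together with its degree. The exact definition used by MST_MOEA is not given in the abstract, so this is an illustrative reading only.

```python
# MST-based crowding sketch: Prim's algorithm plus per-node incident-edge stats.
import numpy as np

def mst_edges(points):
    """Prim's algorithm on the complete Euclidean graph; returns (u, v, w)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    edges = []
    for _ in range(n - 1):
        best = (np.inf, -1, -1)
        for u in np.where(in_tree)[0]:
            for v in np.where(~in_tree)[0]:
                if dist[u, v] < best[0]:
                    best = (dist[u, v], u, v)
        w, u, v = best
        edges.append((u, v, w))
        in_tree[v] = True
    return edges

def tree_crowding(points):
    incident = {i: [] for i in range(len(points))}
    for u, v, w in mst_edges(points):
        incident[u].append(w)
        incident[v].append(w)
    # (mean incident edge length, degree): short edges and high degree suggest
    # a crowded region, so such solutions would be pruned first.
    return {i: (float(np.mean(ws)), len(ws)) for i, ws in incident.items()}

objs = [(0.0, 1.0), (0.1, 0.95), (0.5, 0.5), (1.0, 0.0)]
print(tree_crowding(objs))
```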
Research on Generalized Fuzzy C-Means Clustering Algorithm with Improved Fuzzy Partitions
Zhu Lin, Wang Shitong, and Deng Zhaohong
2009, 46(5):  814-822. 
Cluster analysis is an important tool of unsupervised pattern recognition and has been used in diverse fields such as data mining, biology, computer vision and document analysis. The fuzziness index m has an important influence on the clustering result of fuzzy clustering algorithms and should not be forced to the usual value m=2. In view of its distinctive features in applications and its limitation of allowing m=2 only, a recent fuzzy clustering advance called fuzzy c-means clustering with improved fuzzy partitions (IFP-FCM) is extended in this paper, and a generalized algorithm called GIFP-FCM is proposed for more effective clustering. By introducing a novel membership constraint function, a new objective function is constructed and the GIFP-FCM clustering algorithm is derived. Meanwhile, the robustness and convergence of the proposed algorithm are analyzed from the viewpoints of Voronoi distance and competitive learning. The proposed GIFP-FCM algorithm not only inherits the merits of IFP-FCM, but also generalizes it so that the original limitation on the fuzziness index m is removed. Furthermore, the classical fuzzy c-means algorithm (FCM) and IFP-FCM can be taken as two special cases of the proposed algorithm, and GIFP-FCM thus provides a natural link between FCM and IFP-FCM. Several experimental results, including an application to noisy image texture segmentation, demonstrate its average advantage over FCM and IFP-FCM in both clustering performance and robustness.
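For reference, the baseline that IFP-FCM and GIFP-FCM generalize is standard fuzzy c-means with a free fuzziness index m; the sketch below implements only that baseline, since the modified membership constraint of GIFP-FCM is not specified in the abstract.

```python
# Standard fuzzy c-means with a tunable fuzziness index m.
import numpy as np

def fcm(X, c, m=2.0, iters=100, eps=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((c, n))
    U /= U.sum(axis=0)                     # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=-1)
        d = np.fmax(d, 1e-12)
        # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        new_U = 1.0 / ((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0))).sum(axis=1)
        if np.abs(new_U - U).max() < eps:
            U = new_U
            break
        U = new_U
    return centers, U

X = np.vstack([np.random.default_rng(1).normal(0, 0.3, (50, 2)),
               np.random.default_rng(2).normal(3, 0.3, (50, 2))])
centers, U = fcm(X, c=2, m=1.8)
print("cluster centers:\n", centers)
```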
Region Relations of the Irregular Vague Regions with Holes Based on Vague Sets
Li Song and Hao Zhongxiao
2009, 46(5):  823-831. 
Representing the spatial information of vague regions and handling vague region relations are of great significance in spatial databases, geographic information systems, artificial intelligence and related areas. In the real world, vague information about vague regions with holes is abundant: whether a fuzzy point belongs to a vague region with holes, and its degree of membership, are often uncertain. Existing research on vague regions cannot fully represent the concrete vague information of given points in vague regions with holes, nor the complex region relations of irregular vague regions with holes. To tackle this problem, the region relations of irregular vague regions with holes are studied based on vague sets. The concepts of vague regions, vague holes and atomic regions are defined on the basis of vague sets; an irregular vague region with holes is divided into atomic regions, and the spatial relations of the atomic regions are studied systematically. Based on the spatial relations of the atomic regions, the region relations of irregular vague regions with holes can be represented. The results of this work can handle the indeterminate membership information of vague points in vague regions as well as the region relations of irregular vague regions with holes.
Describing and Cost Analyzing of Web Services Composition Using PPA
Xiao Fangxiong, Huang Zhiqiu, Cao Zining, Yuan Min, and Zhang Junhua
2009, 46(5):  832-840. 
Process algebra comprises a family of formal languages suitable for describing concurrent and communicating systems, including Web services. Although process algebra has been exploited effectively for modeling and verifying Web service composition, only functional aspects have been addressed; non-functional aspects have been ignored because process algebra lacks the capability to model them. Since the quality of service (QoS) of Web service composition is an important non-functional aspect that may involve many quality attributes, such as resources, time and fee, an abstract concept, cost, is proposed to model quality attributes. CCS (calculus of communicating systems), a classical process algebra, is extended with this abstract concept, yielding a new process algebra called PPA (priced process algebra). In PPA, the actions and states of a process are each associated with a price function, so that every action can be tagged with a price and every state can be coupled with a cost. The syntax and semantics of PPA are then presented. Cost weak bisimulation of PPA is defined and its relationship to the weak bisimulation of CCS is established; it is proved that PPA extends CCS with cost-modeling capability. An algorithm is also proposed to construct the cost state space, which is used to select a Web service composition with optimal cost. Experimental results show that PPA can model not only the functional but also the non-functional aspects of Web service composition.
An Approach for Measuring Quality of Web Services Based on the Superposition of Uncertain Factors
Yue Kun, Liu Weiyi, Wang Xiaoling, and Li Jin
2009, 46(5):  841-849. 
As an important basis of Web service selection, discovery and composition, the quality of Web services has been widely studied and intensely debated in recent years. It is necessary to overcome the subjectivity of existing methods for measuring the quality of Web services and to capture the inherent uncertainties implied in service invocations as well as the mutual relationships among the deterministic factors. Therefore, invocation rate, success rate and efficiency are defined as three representative factors of the quality of elementary Web services; their quantitative values can be obtained by statistical computation over the historical invocations of elementary services. In this paper, following the basic ideas of evidence combination and evidence superposition, an approach is proposed to measure the quality of elementary Web services based on the superposition of these uncertain factors. Based on the mean of the corresponding qualities of elementary Web services, another approach is proposed for measuring higher-granularity services. Further, a method for determining priorities among Web services is given accordingly. The proposed methods agree well with intuition, and performance studies verify their efficiency, scalability and feasibility.
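The statistical step described in the abstract, deriving the three factors from an invocation history, is sketched below. The log format, the time window and the particular definition of efficiency are assumptions for illustration; the evidence-superposition combination of the three factors is not shown.

```python
# Computing invocation rate, success rate and efficiency from an invocation log.
from dataclasses import dataclass

@dataclass
class Invocation:
    timestamp: float      # seconds
    succeeded: bool
    response_time: float  # seconds

def quality_factors(log, window_seconds, target_response=1.0):
    invocation_rate = len(log) / window_seconds
    success_rate = sum(i.succeeded for i in log) / len(log) if log else 0.0
    ok = [i.response_time for i in log if i.succeeded]
    # Efficiency here: fraction of the target response time actually needed.
    efficiency = min(1.0, target_response / (sum(ok) / len(ok))) if ok else 0.0
    return invocation_rate, success_rate, efficiency

log = [Invocation(10.0, True, 0.4), Invocation(55.0, True, 0.8),
       Invocation(120.0, False, 3.0), Invocation(300.0, True, 0.5)]
rate, succ, eff = quality_factors(log, window_seconds=3600)
print(f"invocation rate={rate:.4f}/s, success rate={succ:.2f}, efficiency={eff:.2f}")
```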
Fuzzy Extractor Based Remote Mutual Biometric Authentication
Zhang Fan and Feng Dengguo
2009, 46(5):  850-856. 
Biometric authentication eliminates the need for passwords, PINs and other identifiers that are readily compromised, and the network environment provides biometric authentication with more application scenarios. However, traditional remote biometric authentication is heavily constrained by its reliance on secure channels or on performing the biometric authentication process locally. Fuzzy extractors allow one to extract a uniformly distributed random key in an error-tolerant way from a biometric input w and then reliably reproduce the key from any other biometric input w' that is sufficiently close to w. Based on this important security primitive, a zero-storage mutual biometric authentication scheme over non-secure channels is presented in this paper. A two-party key distribution protocol based on a shared secret is used, and biometric samples are utilized to reproduce the shared key. Since user biometrics need not be stored or transferred, user privacy is well protected. It is further shown that the proposed scheme is invulnerable to masquerade attacks from both users and servers, and collusion attacks from multiple servers can also be resisted. Furthermore, the scheme is very scalable: multi-factor authentication schemes can be generated by integrating passwords and smartcards, user registration updates can easily be achieved, and the scheme is suitable for applications with high security requirements.
An ID-Based Efficient Signcryption Key Encapsulation Scheme
Lai Xin, Huang Xiaofang, and He Dake
2009, 46(5):  857-863. 
Hybrid schemes in the KEM-DEM structure are regarded as the most natural approach to public key encryption with IND-CCA security and practical efficiency. A traditional KEM is realized by a public key scheme and provides only confidentiality for the session key used by the DEM. In 2005, combining the idea of signcryption with the KEM-DEM model of hybrid encryption, Alexander proposed a signcryption KEM-DEM hybrid signcryption primitive. In a signcryption KEM, the sender's private key and the receiver's public key are used together to generate the session key and its encapsulation. Compared with a traditional KEM, a signcryption KEM provides both confidentiality and unforgeability for the session key. In this paper the definition of the signcryption KEM is extended to identity-based cryptography. Based on the Sakai-Kasahara identity-based key construction and hard problems related to elliptic curves, a concrete ID-based signcryption key encapsulation scheme is proposed. The security properties of the proposed scheme are proven in the random oracle model: the scheme is ID-IND-CCA secure in confidentiality and ID-UF-CMA secure in unforgeability. In the encapsulation phase, no pairing computation and no MapToPoint hash function are required. Given recent advances in optimized pairing computation and point reduction, the proposed scheme is not only secure but also advantageous in performance.
Region-Based Image Annotation Using Heuristic Support Vector Machine in Multiple-Instance Learning
Lu Jing and Ma Shaoping
2009, 46(5):  864-871. 
Content-based image retrieval (CBIR) has been a focal point of multimedia technology since the 1990s, and automatic image annotation is an important but highly challenging problem within it. Image annotation is treated as an image classification task in which each class label is considered a distinct keyword. In the training data, keywords are usually associated with whole images rather than with individual regions, which poses a major challenge for any learning strategy. A new procedure for learning the correspondence between image regions and keywords under the multiple-instance learning (MIL) framework is presented: the heuristic support vector machine-based MIL algorithm (HSVM-MIL). It extends the conventional support vector machine (SVM) to the MIL setting by introducing alternative generalizations of the maximum margin used in SVM classification. The learning approach leads to a hard mixed integer program that is solved iteratively by heuristic optimization: in each iteration, HSVM-MIL tries to change the class label of only one instance so as to minimize the classification risk. Because classification is performed on individual image regions, the algorithm can directly estimate the correspondence between image regions and keywords, which most MIL algorithms cannot do. The HSVM-MIL algorithm is evaluated on both image annotation data sets and the benchmark MUSK data sets, and demonstrates high classification accuracy compared with other MIL methods.
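The heuristic step described in the abstract, changing one instance label per iteration to reduce the classification risk, can be sketched with an off-the-shelf SVM. This follows the abstract only at a high level; the kernel, the hinge-loss risk and the stopping rule below are assumptions, not the paper's exact formulation.

```python
# Heuristic single-instance label flipping around a standard SVM.
import numpy as np
from sklearn.svm import SVC

def hinge_loss(clf, X, y):
    margins = y * clf.decision_function(X)
    return np.maximum(0.0, 1.0 - margins).sum()

def heuristic_mil_svm(X, y_init, max_iters=20):
    y = y_init.copy()
    clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
    best = hinge_loss(clf, X, y)
    for _ in range(max_iters):
        improved = False
        for i in range(len(y)):
            y_try = y.copy()
            y_try[i] = -y_try[i]               # flip one instance label
            if len(np.unique(y_try)) < 2:
                continue
            cand = SVC(kernel="rbf", gamma="scale").fit(X, y_try)
            loss = hinge_loss(cand, X, y_try)
            if loss < best:                    # keep a flip only if risk drops
                best, y, clf, improved = loss, y_try, cand, True
        if not improved:
            break
    return clf, y

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y0 = np.array([1] * 20 + [-1] * 20)
y0[5] = -1                                     # one deliberately wrong label
clf, y_final = heuristic_mil_svm(X, y0)
print("labels changed by the heuristic:", int((y_final != y0).sum()))
```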
A Novel Fast Algorithm for MAP Super-Resolution Image Reconstruction
Xiao Chuangbai, Yu Jing, and Xue Yi
2009, 46(5):  872-880. 
Super-resolution image reconstruction has recently drawn considerable attention. For some special-purpose imaging devices, such as medical imaging, remote sensing and video capture, the acquired images often cannot reach a higher resolution because of the limitations of the imaging mechanism and the imaging sensor. Super-resolution image reconstruction methods attempt to create a single high-resolution, high-quality image from multiple low-resolution observations (or a video sequence) degraded by warping, blurring, noise and aliasing. Existing super-resolution methods, however, are confronted with slow convergence and expensive computation. To satisfy the requirements of practical applications, a fast super-resolution reconstruction algorithm is built upon the MAP framework. Two improvements are presented to reduce the high computational complexity of the standard MAP algorithm. The first is to compute directly the increment of the MAP objective function as the component of the gradient vector, which avoids redundant evaluation of the objective function. The second is to use the Armijo rule to determine the step size, which avoids computing the computationally demanding Hessian matrix. Experimental results show that the computation time is reduced significantly, while convergence of the solution is guaranteed and similar reconstruction quality is maintained.
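The Armijo step-size rule mentioned in the second improvement is standard and can be illustrated on a generic smooth objective; the MAP super-resolution objective itself depends on the paper's observation model and is not reproduced here, so the quadratic below is only a stand-in.

```python
# Armijo backtracking line search used inside plain gradient descent.
import numpy as np

def armijo_step(f, grad, x, direction, alpha0=1.0, beta=0.5, sigma=1e-4):
    """Shrink the step until f decreases by at least sigma*alpha*<grad, d>."""
    fx, g = f(x), grad(x)
    alpha = alpha0
    while f(x + alpha * direction) > fx + sigma * alpha * g.dot(direction):
        alpha *= beta
        if alpha < 1e-12:
            break
    return alpha

# Example: gradient descent on a simple quadratic with Armijo step sizes.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x = np.zeros(2)
for _ in range(50):
    d = -grad(x)
    x = x + armijo_step(f, grad, x, d) * d
print("minimizer ~", x, "  exact:", np.linalg.solve(A, b))
```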
The VLSI Design of AVS Entropy Coder
Xu Long, Deng Lei, Peng Xiaoming, Ji Xiangyang, and Gao Wen
2009, 46(5):  881-888. 
For the hardware accelerator of the AVS high-definition video encoder (AVS-HD), an efficient VLSI design of the AVS entropy coder (2D-VLC) is presented. Firstly, to exploit the pipelined operation and parallel processing of hardware, a parallel algorithm is derived from the original serial, software-oriented algorithm. Secondly, the entropy coder used for mode decision is simplified so that it requires only logical operations, which saves considerable hardware memory: in the mode decision stage only the bit width (code length) of each DCT coefficient is needed, not its complete codeword, so pure logical operations replace the table lookups required by the final entropy encoder. With the proposed hardware accelerator of the AVS entropy coder, both the computational complexity and the memory requirement of the hardware are reduced. Moreover, the design is further accelerated by an 8-pixel parallel pipelined structure in which eight quantized DCT coefficients are processed in one clock cycle. In AVS VLC, the CodeNumber of each (run, level) pair obtained from the zigzag scan is found by looking up VLC tables, and each CodeNumber is then mapped into a codeword written into the final bit stream. This processing is the same as in other VLC coders, so the VLSI design can also be used in other video coding standards, such as H.264/AVC and MPEG-2/4. Finally, software and RTL simulation results indicate that the VLSI design is fully consistent with the AVS standard and satisfies the requirements of the hardware accelerator.