ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 March 2008, Volume 45 Issue 3
Paper
A Review of Middleware for Wireless Sensor Networks
Li Renfa, Wei Yehua, Fu Bin, and Chen Honglong
2008, 45(3):  383-391. 
As a new technology, wireless sensor networks have attracted considerable attention from both academia and industry. Because wireless sensor network nodes are typically small and cheap, yet the networks they form are powerful and robust, they can be deployed in a large number of applications ranging from habitat monitoring, medical applications, and industrial automation to mission-critical military applications. However, with the development of sensor networks and their applications, the overall complexity of such systems is continuously increasing. Additionally, compared with traditional networks, wireless sensor networks have some unique characteristics. All of these make programming sensor networks non-trivial. Middleware provides programming abstractions which facilitate the programmer's task and bridge the gap between the application and the hardware. The characteristics of wireless sensor networks are described, and the problems which the development and design of wireless sensor network middleware must address are discussed. Then concrete middleware approaches are presented and discussed according to the programming paradigms they use, and several classic approaches are compared by concentrating on how well they meet certain criteria. Finally, directions for improvement are presented for several aspects of middleware design, such as communication paradigms, QoS support, security management, and data processing. The open research issues in this field are also pointed out.
A Load Balance Clustering Algorithm for Multilevel Energy Heterogeneous Wireless Sensor Networks
Wang Xianghui, Zhang Guoyin, and Xie Xiaoqin
2008, 45(3):  392-399. 
In multilevel heterogeneous wireless sensor networks, the initial energy of nodes is randomly distributed within a certain range, and load balancing and energy efficiency are the significant challenges for clustering algorithms in energy-heterogeneous networks. Current distributed clustering algorithms are mainly designed for homogeneous or two-level heterogeneous networks, and it is hard for them to achieve load balance when node energy exhibits multilevel heterogeneity. Therefore a load balance clustering algorithm, LBCA (load balance clustering algorithm), for multilevel energy-heterogeneous sensor networks is proposed. The algorithm selects cluster-head nodes and balances the load according to the local energy distribution, and can prolong the stability period of the network. During cluster-head selection, when the energy in a local area is balanced, nodes with lower average communication cost are preferred as cluster heads, which helps decrease the total energy cost of the local area. When the energy in a local area is imbalanced, nodes with high residual energy are preferred as cluster heads, which helps balance the load. LBCA is compared with prior distributed clustering approaches. The simulation results show that in multilevel energy-heterogeneous networks, LBCA achieves better load balance and a longer stability period.
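The two-branch selection rule described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's actual LBCA formulas: the coefficient-of-variation balance test, the 0.1 threshold, and the per-node `energy`/`cost` fields are all assumptions introduced here.

```python
import statistics

def select_cluster_head(neighbors, balance_threshold=0.1):
    """Pick a cluster head from `neighbors`, a list of dicts with keys
    'id', 'energy' (residual energy) and 'cost' (average communication
    cost). Illustrative LBCA-style rule: when local energy is balanced,
    prefer low communication cost; otherwise prefer high residual energy."""
    energies = [n["energy"] for n in neighbors]
    mean_e = statistics.mean(energies)
    # Coefficient of variation as a crude energy-balance measure (assumption).
    spread = statistics.pstdev(energies) / mean_e if mean_e > 0 else 0.0
    if spread <= balance_threshold:
        # Energy balanced: minimize the total local communication cost.
        return min(neighbors, key=lambda n: n["cost"])["id"]
    # Energy imbalanced: shift the load to the node with the most energy left.
    return max(neighbors, key=lambda n: n["energy"])["id"]
```

With balanced energies the cheapest communicator wins; once one node holds much more residual energy than its neighbors, it takes over as cluster head regardless of cost.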
Research on Network Coding: A Survey
Yang Lin, Zheng Gang, and Hu Xiaohui
2008, 45(3):  400-407. 
Network coding has emerged as a promising technique to increase network capacity, improve robustness and security, and better share the available resources of networks; intermediate nodes are allowed to perform processing operations on incoming packets in addition to just forwarding them. Recent advances in network coding techniques are reviewed in this paper. Firstly, the basic idea of network coding is introduced and its benefit over routing in the multicast scenario is illustrated. Then centralized and decentralized coding schemes are classified and analyzed in detail, and the advantages and disadvantages of the two kinds of coding schemes are compared. Furthermore, synchronization, error correction, and encoding and decoding speed for practical network coding are discussed. Moreover, the state of the art in applications of network coding to wireless networks, P2P content distribution, distributed storage systems, and network security is summarized. Finally, open issues and challenges for network coding, in both theory and application, are proposed. An important trend is to design simple and efficient network coding schemes and to combine network coding with other technologies, such as channel coding and modulation, routing, queue scheduling, and streaming media.
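The canonical illustration of the benefit over routing is the butterfly network: the bottleneck node forwards the XOR of two packets instead of choosing one, and each sink combines the coded packet with the plain packet it receives on its direct link to recover the other message. A minimal sketch (the payloads are arbitrary placeholders):

```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

# Butterfly-network intuition: the bottleneck link carries a XOR b rather
# than a or b alone, so one transmission serves both multicast sinks.
pkt_a, pkt_b = b"hello-A!", b"hello-B!"   # placeholder payloads
coded = xor_packets(pkt_a, pkt_b)         # sent over the bottleneck link
recovered_b = xor_packets(coded, pkt_a)   # at the sink that already has a
recovered_a = xor_packets(coded, pkt_b)   # at the sink that already has b
```

XOR coding over GF(2) is the simplest instance; the linear network codes discussed in the survey generalize this to random linear combinations over larger fields.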
A Reputation-Based Trust Model Based on Probability and Statistics for P2P Systems
Wu Peng, Wu Guoxin, and Fang Qun
2008, 45(3):  408-416. 
Peer-to-peer (P2P) technology has been widely used in file sharing, distributed computing, e-markets and information management. One of the fundamental challenges for P2P systems is the ability to manage the risks involved in interacting and collaborating with previously unknown and potentially malicious parties. Reputation-based trust management systems can successfully mitigate this risk by deriving the trustworthiness of a certain peer from that peer's behavior history. However, in the trust models employed by existing P2P systems, the validity of peers' trust valuation is seriously affected by malicious behaviors such as strategic cheating and dishonest recommendation. To solve this problem, a novel reputation-based trust model based on probability and statistics for P2P systems is proposed. Drawing on the subjective trust relationships of sociological theory, the proposed model uses experience-based and recommendation-based trust relationships to compute the trustworthiness of peers. In particular, the model introduces three parameters, namely experience time-sensitivity, referee credibility and recommended-information reliability, and thus it can react adequately to peers' malicious behaviors. Theoretical analysis and simulation show that the proposed model has advantages over existing P2P reputation systems in coping with peers' malicious behaviors. It is highly effective in countering malicious peers engaging in strategic cheating, dishonest recommendation, and collusion.
Scheduling for Traffic Combination of Multi-Core Trace Data
Hu Xiao and Chen Shuming
2008, 45(3):  417-427. 
On-chip trace data contains run-time information of embedded multi-core processors for software debugging. Trace data are transferred through a special data path and output pins. Scheduling for combining the traffic of multi-source trace data is one of the key issues that affect the performance of an on-chip trace system. Features of trace traffic combination, evaluation metrics and scheduling schemes are analyzed. A novel queue scheduling algorithm (the TraceDo algorithm) with a service required threshold (SRH) and a minimum service granularity (MSG) is presented. An SRH is set for each queue, and queue switching is controlled by comparing the queue length with the SRH threshold level. Users can control queue length distributions and overflow rates according to overflow costs, buffer capacities and the burst characteristics of trace traffic. Using MSG together with lazy switching, a minimum number of consumers served in a queue between two switchovers is guaranteed, and this service granularity is increased when other queues have marginal buffer capacity. Switchover counts are therefore reduced, and the overflow probabilities accompanying these gains are constrained by the SRH. Simulation results show that the algorithm controls the overflow rate of each queue effectively and fully utilizes the buffer capacity according to the assigned queue priorities. The algorithm is realized in Verilog HDL. Compared with a leading method, the overflow rate is reduced by 30% with an additional 2015 μm² in area.
Study of Nodal Movement and Distribution in a Terrain with Entrances
Gong Weibin, Chang Yilin, Shen Zhong, and Zhang Ying
2008, 45(3):  428-435. 
In simulations of ad hoc networks, the choice of the underlying mobility model has a great influence on the network topology and the performance of protocols. However, most existing mobility models are designed for idealistic cases and cannot provide realistic movement scenarios. On the basis of the RWP (random waypoint) mobility model, a novel realistic mobility model, RWP with entrances in the terrain (RWPWE), is proposed. In the proposed model, 1) there is at least one entrance on the edge of or inside the terrain; 2) the movement pattern within the terrain can be described by the RWP model; and 3) whether a node moves from inside the terrain to outside through an entrance is decided by the probability of staying inside the terrain. The influence of entrances on node mobility is characterized by the attributes of nodal velocity, transition time, transition length and spatial distribution in the terrain. Theoretical analysis and simulation show that, due to the existence of terrain entrances, node mobility is more complex and the nodal spatial distribution differs greatly from that of the RWP model. Moreover, these differences have a significant impact on ad hoc network topology and eventually on the performance of protocols.
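The three model rules can be illustrated with a toy waypoint-trace generator. This is a simplification under stated assumptions: the leave decision is made once per waypoint with fixed probability 1 − p_stay, speeds and pause times are omitted, and the default terrain size and entrance positions are invented here; the paper's RWPWE model is more detailed.

```python
import random

def rwpwe_trace(steps, size=100.0, entrances=((0.0, 50.0), (100.0, 50.0)),
                p_stay=0.9, seed=1):
    """Toy RWPWE waypoint trace in a size x size terrain.

    Movement inside the terrain follows RWP: each step picks a uniform
    random waypoint. With probability 1 - p_stay the node instead heads
    to a random entrance and leaves, recorded as an ('exit', x, y) event."""
    rng = random.Random(seed)
    trace = [(rng.uniform(0, size), rng.uniform(0, size))]
    for _ in range(steps):
        if rng.random() > p_stay:
            trace.append(("exit",) + rng.choice(entrances))  # leave via entrance
        else:
            trace.append((rng.uniform(0, size), rng.uniform(0, size)))
    return trace
```

Because departures are tied to fixed entrance positions, node density near the entrances differs from plain RWP's center-heavy spatial distribution, which is the effect the paper analyzes.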
A Long-Range Dependence Sliding Window Time-Varying Estimation Algorithm for Network Traffic
Wei Jinwu, Zhang Jin, and Wu Jiangxing
2008, 45(3):  436-442. 
The long-range dependence (LRD) of network traffic reveals itself in a dynamically evolving way, so quantifying LRD characteristics is one of the vital problems in the study of network behavior. Seven types of traditional LRD estimators are comprehensively evaluated in this paper, showing that they cannot give accurate estimates under some complex conditions. The main reason is that the traditional methods smooth the traffic series to some degree by averaging over the global domain; consequently, some important features of network traffic, such as burstiness and LRD, are destroyed. A sliding window time-varying Hurst (SWTV-H) exponent estimation algorithm for LRD characteristics is proposed to improve Hurst exponent estimation, based on the concept of a time-varying Hurst exponent. The SWTV-H algorithm can estimate the local Hurst exponent at a given resolution level, and provides a dynamic estimate of the global LRD trend by shifting the local domain. The effectiveness of the SWTV-H algorithm is validated on artificial fractional Gaussian noise (fGn) series and actual network traffic series. The results indicate that the SWTV-H algorithm estimates LRD characteristics more accurately and reliably than the traditional methods, and that its performance is robust.
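The idea of a windowed, time-varying Hurst estimate can be sketched by applying one of the classical estimators, the aggregated-variance method, inside a sliding window. The SWTV-H algorithm's actual local estimator, window width and shifting scheme may differ; the values below are illustrative.

```python
import math
import statistics

def hurst_aggvar(x, block_sizes=(4, 8, 16, 32)):
    """Aggregated-variance Hurst estimate: for an fGn-like series the
    variance of block means scales as m**(2H - 2), so H is recovered
    from the slope of log(variance) versus log(m)."""
    pts = []
    for m in block_sizes:
        means = [statistics.fmean(x[i:i + m])
                 for i in range(0, len(x) - m + 1, m)]
        pts.append((math.log(m), math.log(statistics.pvariance(means))))
    mx = statistics.fmean(p[0] for p in pts)
    my = statistics.fmean(p[1] for p in pts)
    slope = (sum((lx - mx) * (ly - my) for lx, ly in pts)
             / sum((lx - mx) ** 2 for lx, _ in pts))
    return 1.0 + slope / 2.0

def sliding_hurst(x, window=2048, step=512):
    """Time-varying estimate: one local Hurst value per window position."""
    return [hurst_aggvar(x[i:i + window])
            for i in range(0, len(x) - window + 1, step)]
```

Averaging once over the whole series yields a single global H; shifting the local window, as above, exposes how the exponent drifts over time, which is the point of the SWTV-H approach.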
Measurement and Analysis for Performance of Mobile IPv6 Handover and its Bottleneck
Chen Ji, Zheng Hongxia, and Xie Gaogang
2008, 45(3):  443-453. 
Mobile IPv6 is a protocol proposed by the IETF which allows mobile nodes to remain reachable while moving around in the IPv6 Internet. Handover is a key factor that influences Mobile IPv6 performance. Handover latency is too long for real-time applications such as VoIP, and causes performance degradation of the current communication. A network-layer handover procedure can be divided into four phases: movement detection, address configuration, home agent registration and correspondent node registration. The impact of the different protocol layers and the performance bottlenecks in all procedures during MIPv6 handover are analyzed in this paper based on performance measurement experiments. A measurement method is deployed to accurately parse the delay in each handover phase, and a modified method for measuring the movement detection delay as well as suggestions for reducing the delay in each phase of the handover procedure are proposed. By measuring the TCP sliding window and the performance of upper-layer applications such as FTP during handover, the effects of TCP features and upper-layer application performance on the handover procedure are also analyzed. All of these results can provide valuable guidelines for improving mobile handover performance and designing an effective mobility protocol.
An Overview of the Research on Coevolutionary Algorithms
Dong Hongbin, Huang Houkuan, Yin Guisheng, and He Jun
2008, 45(3):  454-463. 
Evolutionary algorithms often suffer from premature convergence because of the loss of population diversity at an early stage. Coevolutionary algorithms, a hot research topic in computational intelligence, aim at improving conventional evolutionary algorithms. Inspired by the principle of natural selection, coevolutionary algorithms are search methods in which processes of mutual adaptation occur amongst agents that interact strategically. The outcomes of interaction reveal a reward structure that guides evolution towards the discovery of increasingly adaptive behaviors. Much of the work on coevolutionary algorithms has focused on two kinds of interaction: competitive coevolutionary systems and cooperative coevolutionary systems. Competitive coevolutionary algorithms are natural models for evolving objects such as game-playing programs, for which it is difficult to write an external fitness function but quite simple to define fitness in terms of competitive success against other programs in the evolving population. Cooperative coevolutionary algorithms are natural models for evolving complex objects by decomposing them into coevolving subassemblies, where the fitness of a subassembly is determined by how well it works with the other subassemblies in producing a complete object. The research state and advances in coevolutionary algorithms are discussed and surveyed, the implementation techniques and main applications are outlined, and further research directions are indicated.
Self-Organized Particle Swarm Optimization Based on Feedback Control of Diversity
Jie Jing, Zeng Jianchao, and Han Chongzhao
2008, 45(3):  464-471. 
Particle swarm optimization (PSO) is a swarm intelligence algorithm originally inspired by the social behavior of bird flocking. Since it was proposed in 1995, the algorithm has proved to be a valid optimization technique and has been applied successfully in many areas. However, like other evolutionary algorithms, PSO also suffers from premature convergence, especially on large-scale and complex problems. In order to alleviate premature convergence, this paper develops a self-organized PSO (SOPSO). SOPSO regards the swarm as a self-organized system, and introduces a negative feedback mechanism to imitate the information interaction between the particles and the swarm background. Since swarm diversity is a key factor influencing the global performance of PSO, SOPSO adopts swarm diversity as the main dynamic information controlling parameter tuning through feedback, which in turn makes particles diverge or converge adaptively and contributes to a successful global search. The proposed method is applied to some complex function optimization problems and compared with other notable improved PSO variants. Simulation results show that SOPSO based on feedback control of swarm diversity is a feasible technique, which effectively alleviates premature convergence and improves the global performance of PSO on complex functions.
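The feedback loop can be sketched with a basic PSO whose inertia weight responds to swarm diversity: low diversity raises the weight so particles diverge, otherwise it decays so they converge. The diversity measure, thresholds and constants below are illustrative stand-ins for SOPSO's actual controller, not the paper's parameters.

```python
import random

def pso_diversity_feedback(f, dim=2, n=20, iters=200, lo=-5.0, hi=5.0,
                           d_low=0.05, seed=0):
    """Minimize f over [lo, hi]**dim with PSO; the inertia weight w is
    tuned by negative feedback on swarm diversity (mean distance to the
    centroid, normalized by the search span)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P, pbest = [x[:] for x in X], [f(x) for x in X]   # personal bests
    g = min(range(n), key=pbest.__getitem__)
    G, gbest = P[g][:], pbest[g]                      # global best
    w = 0.7
    for _ in range(iters):
        c = [sum(x[d] for x in X) / n for d in range(dim)]
        div = sum(sum((x[d] - c[d]) ** 2 for d in range(dim)) ** 0.5
                  for x in X) / (n * (hi - lo))
        # Feedback rule: diverge when diversity is low, converge otherwise.
        w = min(0.9, w + 0.05) if div < d_low else max(0.4, w - 0.01)
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + 1.5 * rng.random() * (P[i][d] - X[i][d])
                           + 1.5 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest
```

The same skeleton with the feedback line removed is plain inertia-weight PSO, which makes the effect of the diversity controller easy to compare.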
An Active Learning Algorithm by Selecting the Most Possibly Wrong-Predicted Instances
Long Jun, Yin Jianping, Zhu En, and Cai Zhiping
2008, 45(3):  472-478. 
Active learning methods can alleviate the effort of labeling large amounts of instances by selecting and asking experts to label only the most informative examples. Sampling is a key factor influencing the performance of active learning. Currently, the leading sampling methods generally choose the instance or instances that can reduce the version space by half. However, the strategy of halving the version space assumes that each hypothesis in the version space has an equal probability of being the target function, which cannot be satisfied in real-world problems. In this paper, the limitation of the halving strategy is analyzed. Then a sampling method named MPWPS (most possibly wrong-predicted sampling) is presented, aiming to reduce the version space by more than half. When sampling, MPWPS chooses the instance or instances most likely to be predicted wrongly by the current classifier, so that more than half of the hypotheses in the version space are eliminated. Compared with existing active learning methods, MPWPS samples fewer times to reach the same classifier accuracy. The experiments show that on most datasets the MPWPS method samples fewer instances than traditional sampling methods when obtaining the same target accuracy.
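The selection step can be sketched with a least-confidence proxy: treat the unlabeled instances whose top predicted class probability is lowest as the ones "most possibly wrong-predicted". This is an illustrative approximation of the MPWPS criterion, not the paper's exact wrong-prediction estimate, and `toy_proba` is a hypothetical one-feature classifier invented for the example.

```python
import math

def mpwps_select(unlabeled, predict_proba, k=1):
    """Return the k unlabeled instances the current model is most likely
    to misclassify, approximated as those with the lowest top class
    probability (least-confidence proxy for MPWPS)."""
    return sorted(unlabeled, key=lambda x: max(predict_proba(x)))[:k]

def toy_proba(x):
    """Hypothetical logistic classifier over a single real feature."""
    p = 1.0 / (1.0 + math.exp(-x))  # probability of class 1
    return (p, 1.0 - p)
```

Instances near the decision boundary (here, near 0) get selected first; confidently classified points far from the boundary are left unlabeled.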
Study on GPGP-Cooperation-Mechanism-Based Multi-Agent Job Shop Scheduling Method
Ma Xin and Liang Yanchun
2008, 45(3):  479-486. 
As an important part of the job shop manufacturing system, the job shop scheduling problem (JSSP) affects the agility and intelligence of the whole enterprise. In real scenarios, resource restrictions coexist with process restrictions, which makes the JSSP NP-hard. Therefore, there is not yet a generally applicable method for solving the JSSP. In this paper, a job shop scheduling model combining a MAS (multi-agent system) with multiple intelligence algorithms is presented. The proposed model is based on the generalized partial global planning (GPGP) mechanism and combines the advantages of static intelligence algorithms with those of a dynamic MAS. A scheduling process from "initialized macro-scheduling" to "repeated micro-scheduling" is designed for large-scale complex problems, enabling an effective and widely applicable prototype system for the JSSP. Under this scheme, a set of theoretical strategies in GPGP is summarized in detail. A two-stage multi-objective optimization scheduling is performed, and the GPGP cooperation mechanism is simulated for the JSSP using the simulation software DECAF. The simulation results are compared with the CNP cooperation mechanism and the NONE mechanism. The results show that the proposed model based on the GPGP cooperation mechanism not only improves effectiveness, but also reduces resource cost.
Approximate Computation of Multi-Agent Dynamic Influence Diagrams
Yao Hongliang, Wang Hao, Wang Ronggui, and Li Junzhao
2008, 45(3):  487-495. 
Due to their high dimensionality and uncertainty, complex systems are often hard to represent and process, and the knowledge representation and computation methods for complex systems are open hard problems in complex system research. At present, MAIDs cannot model dynamic environments, and it is difficult for multi-agent MDPs to represent structural relations among agents; therefore a multi-agent dynamic influence diagrams (MADIDs) model is given to represent relations among agents in a dynamic environment in a local factored probability form. The computation of the joint probability distribution and the joint utility function of MADIDs is a high-dimensional problem, so approximate computation methods are investigated. A distribution approximation method based on hierarchical decomposition of probability-structured MADIDs is studied; based on an analysis of the complexity and error of this distribution approximation, a function δ(k) is introduced to establish an equilibrium between the precision and complexity of the approximate distribution. Then a BP neural network is given to approximately compute utility-structured MADIDs by learning local utilities. Finally, on model instances, the experimental results show the validity of the approximate computation methods for the MADIDs model.
Semi-Supervised Robust On-Line Clustering Algorithm
Jin Jun and Zhang Daoqiang
2008, 45(3):  496-502. 
Recently, semi-supervised learning has attracted much attention in the machine learning community. One reason is that in many learning tasks there is a large supply of unlabeled data but insufficient labeled data, because the latter is much more expensive to obtain than the former. Typically, semi-supervised learning is applicable to both clustering and classification. This paper focuses on semi-supervised clustering, in which some label-level or instance-level supervised information is used along with the unlabeled data in order to obtain a better clustering result. A semi-supervised robust on-line clustering algorithm called Semi-ROC is developed, which introduces supervision information in the form of class labels into the previously proposed robust on-line clustering (ROC) algorithm. After introducing the supervised information, the algorithm can obtain more reliable results than ROC and AddC. The experimental results on an artificial dataset and UCI benchmark data sets show that the proposed Semi-ROC can effectively use a little supervision information to enhance clustering performance, improving clustering validity significantly. Besides, when dealing with noise, Semi-ROC is more robust than both ROC and AddC.
A Physically-Based Detail-Preserving Algorithm for Real-Time Deformation
Che Yinghui, Liang Xiaohui, and Zhao Qinping
2008, 45(3):  503-509. 
Real-time deformation is one of the hot research issues in computer graphics, widely used in video games, virtual surgery and virtual reality. However, physically-based deformation can be computationally expensive, so real-time deformation of complex objects remains an open problem. In this paper, a new method for real-time deformation is proposed, which integrates the advantages of physically-based deformation and multi-resolution mesh editing and works well for complex elastic objects. In the preprocessing stage, the algorithm adaptively simplifies the original detail mesh to generate a base mesh representation, and then encodes the positions and normals of the detail mesh vertices relative to the base mesh. In the real-time rendering phase, physically-based deformation is performed on the base mesh, and from the deformed base mesh and the encoded geometric details, the deformed detail mesh can easily be reconstructed. To improve rendering speed, the rendering stage fully utilizes the parallelism of graphics hardware, with most calculations executed by the fragment program. Experiments show that the method preserves local details well during deformation and is suitable for real-time deformation of objects with complex surface details.
An AVS HDTV Video Decoder Architecture Based on HW/SW Partitioning
Jia Huizhu, Xie Xiaodong, and Gao Wen
2008, 45(3):  510-518. 
Video decoder architecture is developing from dedicated hardware to HW/SW partitioning, owing to the processing power of hardware and the flexibility and programmability of software. The increasing complexity of AVS algorithms has raised the challenge for HW/SW-partitioned architectures. In this paper, AVS (a Chinese next-generation audio/video coding standard) algorithms and their complexity are first analyzed. Based on the analysis, an optimized real-time AVS HDTV video decoder with HW/SW partitioning is proposed. It supports real-time AVS JiZhun profile 6.0 level video decoding, as well as flexible A/V synchronization, error recovery, buffer management and system control mechanisms. A hardware implementation of a macroblock-level 7-stage synchronized pipeline is selected. The software tasks are realized on a 32-bit RISC processor. Under the control of the RISC processor, the design can support future AVS video syntax extensions and revisions. The AVS decoder (RISC processor and hardware accelerators) is described in high-level Verilog/VHDL hardware description language and implemented in a single-chip AVS HDTV real-time decoder. At a 148.5MHz working frequency, the decoder chip supports real-time decoding of NTSC, PAL or HDTV (720p@60 f/s or 1080i@60 field/s) bit-streams. Finally, the decoder chip has been physically implemented in AVS101 on a 6-metal 0.18μm SMIC CMOS process and fully tested on a prototyping board.
Research on Directional Penetration Depth Algorithm in Collision Response
Liu Li, Wang Zhaoqi, Xia Shihong, and Li Chunpeng
2008, 45(3):  519-526. 
Directional penetration depth (DPD) is defined as the minimum distance by which one polyhedron must translate along a given direction to make the interiors of two overlapping polyhedra disjoint. Since penetration depth computation influences collision response, it possesses great theoretical significance and application value in promoting the naturalness of virtual reality. However, the existing methods cannot achieve computing speed, computing accuracy and algorithm generality at the same time, which severely limits their practicability. To address this problem, a fast directional penetration depth algorithm is presented. First, based on penetration point sets, a method is developed to calculate the DPD between any two polyhedra using intersection tests. Then a series of optimization strategies is presented, including a method to construct the bounding volume hierarchy and a method to reduce the number of points and triangles that need to be tested. Under the required computational accuracy, these strategies strongly improve the performance of the algorithm. Experimental results show that this algorithm can quickly and accurately calculate the DPD between two general complex polyhedra, even when the two polyhedra contain tens of thousands of triangles and have multiple contacts. Moreover, compared with other algorithms, this algorithm can handle some special penetration problems, and thus has more generality.
Motion String: A Motion Capture Data Representation for Behavior Segmentation
Yang Yuedong, Wang Lili, and Hao Aimin
2008, 45(3):  527-534. 
Currently, motion data are often stored in small clips for use in animations and games, so behavior segmentation of motion data is a key problem in motion capture processing. In order to segment motion data into small clips, a new symbolic representation of motion capture data is introduced and a behavior segmentation approach based on this representation is explored. The high-dimensional motion capture data are first mapped to a low-dimensional space, based on spectral clustering and a sliding-window distance extending weighted quaternion distances. The low-dimensional data are then represented as a character string, called a motion string (MS), via temporal reverting and max filtering. Because an MS converts motion data into a character string, many string analysis methods can be used for motion processing; in addition to motion segmentation, motion strings may be widely applied in other areas such as motion retrieval and motion compression. Suffix trees are used to segment the motion data by extracting all static substrings and periodic substrings from the MS. Each such substring represents a behavior segment, and the motion data are segmented into distinct behavior segments by annotating these substrings. In the experiments, the MS proves to be a powerful representation for motion segmentation, providing good performance.
Combined Multiple Classifiers Based on TBL Algorithm and Their Application in Question Classification
Li Xin, Huang Xuanjing, and Wu Lide
2008, 45(3):  535-541. 
As a very active branch of natural language processing, open-domain question answering (QA) systems have attracted increasing attention, for they can understand questions in natural language and thus provide users with compact and exact results. Question classification (QC), i.e., assigning questions to several semantic categories, is very important for a question answering system and directly affects its performance in selecting correct answers; its main task is to understand the demand of users. In this paper, to investigate automatic question classification, different classification features, such as bag-of-words, bi-grams, synsets from WordNet and dependency structures from Minipar, are compared. Support vector machines (SVM) and such machine learning ensemble approaches as transformation-based error-driven learning (TBL), voting and back-propagation artificial neural networks (BP) are experimented with. Multi-feature SVM classifiers, BP and voting ensemble learning methods, and a question classification algorithm combining multiple SVM classifiers based on the TBL algorithm are presented and compared with single-feature SVM. The combined method, using linguistic knowledge such as synsets from WordNet and dependency structures from Minipar as question representations, proves more accurate on an open question classification corpus; using dependency structures achieves a 1.8% improvement over not using them.
A Hierarchical Search Result Clustering Method
Zhang Gang, Liu Yue, Guo Jiafeng, and Cheng Xueqi
2008, 45(3):  542-547. 
Abstract ( 555 )   HTML ( 0)   PDF (357KB) ( 544 )  
Search result clustering can help users quickly browse through the documents returned by a search engine. Traditional clustering techniques are inadequate since they cannot generate clusters with highly readable names. In order to improve the performance of search result clustering and help users quickly locate relevant documents, a label-based clustering method is used. A multi-feature integrated model, which combines DF, query-log, and query-context features, is developed to extract base-cluster labels, and basic clusters are built from the extracted labels. In order to set up a hierarchical clustering structure, a basic cluster relation graph is built over these basic clusters, and a hierarchical cluster structure is generated from this graph using the graph-based clustering algorithm (GBCA). To evaluate the search result clustering method, a test bed is set up, and P@N and F-measure are introduced to evaluate the extracted labels and the document distribution in clusters. The experiments show that the integrated label-extraction model is very effective: the more features are used, the higher the P@N obtained. Compared with the STC and Snaket clustering methods, GBCA outperforms both in cluster label extraction and F-measure.
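The basic cluster relation graph mentioned above can be sketched in a minimal, assumption-laden form: here each basic cluster is a label mapped to its set of document ids, and an edge is drawn when the Jaccard overlap of two clusters' documents exceeds a threshold. The overlap criterion and the threshold are illustrative guesses, not GBCA's actual edge definition.

```python
def build_relation_graph(basic_clusters, overlap_threshold=0.5):
    """Connect two basic clusters when their document sets overlap enough.

    basic_clusters: dict mapping a cluster label to a set of doc ids.
    Returns the edge set of the basic cluster relation graph.
    """
    labels = list(basic_clusters)
    edges = set()
    for i, a in enumerate(labels):
        for b in labels[i + 1:]:
            da, db = basic_clusters[a], basic_clusters[b]
            jaccard = len(da & db) / len(da | db)
            if jaccard > overlap_threshold:
                edges.add((a, b))
    return edges
```

A hierarchical structure could then be read off the graph, e.g. by treating densely connected label groups as parent clusters.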
Resource Sharing in Continuous Extreme Values Monitoring on Sliding Windows
Tian Li, Wang Le, Li Aiping, Zou Peng, and Jia Yan
2008, 45(3):  548-556. 
Abstract ( 324 )   HTML ( 1)   PDF (570KB) ( 353 )  
The problem of resource sharing in continuous extreme value monitoring (MAX or MIN) over sliding windows is considered. First, an effective pruning technique called key points (KP) is developed to minimize the number of elements that must be kept for all queries; it can be shown that on average the cardinality of KP satisfies M = O(logN), where N is the number of points contained in the widest window. Then an efficient algorithm called MCEQP (multiple continuous extreme queries processing) is proposed for continuously monitoring K queries with different sliding-window widths. The main idea of MCEQP is to handle queries collectively by exploiting their similarities and sharing resources such as computation and memory, which is more efficient than handling them separately. The linked-list implementation of MCEQP can update all K results in O(M+K) time when a new tuple arrives, where M is the cardinality of KP. A trigger-based technique is provided to avoid frequent but unnecessary processing of data expirations, so that only at certain special time instants is O(K) time needed to update the K results. Dynamic registration and removal of queries are also supported by MCEQP in O(M+K) time. Theoretical analysis and experimental evidence show the efficiency of the proposed approach in both storage reduction and performance improvement.
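The key-points idea for a single MAX query can be sketched with the classic monotonic deque: an arriving value evicts all smaller or equal predecessors, since they can never again be a window maximum, so only the "key points" survive. This single-query sketch is an illustration of the pruning principle only; MCEQP itself shares one such structure across K queries of different widths, which is not shown here.

```python
from collections import deque

def sliding_max(stream, width):
    """Yield the maximum of each full window of the given width.

    kp holds (index, value) pairs with strictly decreasing values: the
    key points. Everything dominated by a newer, larger value is pruned.
    """
    kp = deque()
    for i, v in enumerate(stream):
        while kp and kp[-1][1] <= v:   # dominated points can never win
            kp.pop()
        kp.append((i, v))
        if kp[0][0] <= i - width:      # expire points that left the window
            kp.popleft()
        if i >= width - 1:
            yield kp[0][1]             # front key point is the current MAX
```

Each element enters and leaves the deque at most once, giving amortized O(1) work per arriving tuple for one query.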
Secure and Efficient Protocols for Watermark Verification
Xu Wenli, Yu Yeyun, and Wang Yumin
2008, 45(3):  557-562. 
Abstract ( 428 )   HTML ( 0)   PDF (348KB) ( 396 )  
A provably secure protocol for watermark verification, based on a perfect zero-knowledge interactive proof system and a bit commitment scheme, is proposed. Most of the existing literature uses Cox's spread-spectrum watermarking scheme or its modified versions; in this paper, the multiplicative embedding rule is adopted, which is more robust and better suited to copyright protection. In the watermark extraction process, taking into account the different transform domains (such as DWT, DCT, and DFT), communication channel performance, and the human visual system, robust optimal watermark detectors based on the generalized Gaussian distribution model and the Weibull distribution model are adopted for the different transform domains. The watermark information is committed using a bit commitment scheme, and the watermark embedding position information is hidden by joining in a random sequence. The proposed protocol ensures that a prover soundly convinces a verifier of the presence of a watermark in certain stego-data without revealing any knowledge about the watermark (including the watermark itself, the embedding positions, and the watermark extraction keys) that a potential attacker could use to modify, remove, or spoil it. Moreover, various watermark detectors in different transform domains are supported, so the presented simple and secure protocol can greatly improve the security and efficiency of watermark verification across different watermark detection schemes.
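The bit commitment primitive used by the protocol can be illustrated with the standard hash-based construction: committing binds the prover to a bit without revealing it, and revealing the nonce later lets the verifier check it. This is a generic textbook sketch, not the paper's specific commitment scheme or its zero-knowledge protocol.

```python
import hashlib
import os

def commit(bit):
    """Commit to a bit (0 or 1) under a fresh random nonce.

    Returns (digest, opening): the digest is published; the opening
    (bit, nonce) is kept secret until the reveal phase.
    """
    nonce = os.urandom(16)
    digest = hashlib.sha256(nonce + bytes([bit])).hexdigest()
    return digest, (bit, nonce)

def verify(digest, bit, nonce):
    """Check that a revealed (bit, nonce) pair opens the commitment."""
    return hashlib.sha256(nonce + bytes([bit])).hexdigest() == digest
```

Hiding follows from the random nonce, and binding from the collision resistance of the hash: the prover cannot later open the same digest to the opposite bit.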
One-Way Hash Function Based on Integer Coupled Tent Maps and Its Performance Analysis
Liu Jiandong
2008, 45(3):  563-569. 
Abstract ( 450 )   HTML ( 0)   PDF (425KB) ( 509 )  
A one-way hashing algorithm is a deterministic algorithm that compresses an arbitrarily long message into a value of specified length; the output value represents the fingerprint or digest of the message. In this paper, a novel cryptographic one-way hash function based on integer coupled tent maps is put forward and an analysis of its characteristics is carried out. The algorithm adopts the two-way coupled map lattice model and replaces the traditional logic functions with the integer tent map, so that it has ideal diffusion and confusion properties. A 160-bit hash value is obtained from a message of arbitrary length by means of a nonlinear transform on the coupled iteration integer sequences. Analysis and simulation results indicate that the algorithm has good one-way and collision-resistance performance, satisfying the functional requirements of a one-way hash. The hash function uses only simple integer operations, so it can be easily realized in both software and hardware with high implementation efficiency. The integer coupled tent maps-based hash function is thus a highly competitive candidate for practical applications of hash functions and secure information communication in computer networks.
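The two ingredients named above, an integer tent map and two-way neighbor coupling, can be sketched as a toy lattice iteration. Every parameter here (the 16-bit state space, the coupling via shifted neighbors, the lattice size) is an illustrative assumption; the paper's actual map, coupling constants, and the nonlinear transform producing the 160-bit digest are not reproduced.

```python
N = 2 ** 16  # toy state-space size, not the paper's parameter

def int_tent(x):
    """Integer tent map on [0, N): stretch-and-fold nonlinearity."""
    return 2 * x if x < N // 2 else 2 * (N - 1 - x) + 1

def coupled_step(lattice, eps_shift=2):
    """One iteration of a toy two-way coupled map lattice.

    Each site is driven by the tent map of its own state plus shifted
    contributions from both neighbors (wrapping around), mod N.
    """
    L = len(lattice)
    return [
        (int_tent(lattice[i])
         + (lattice[(i - 1) % L] >> eps_shift)
         + (lattice[(i + 1) % L] >> eps_shift)) % N
        for i in range(L)
    ]
```

Changing a single site's value alters that site's tent image immediately and, through the neighbor terms, spreads to the rest of the lattice over subsequent iterations, which is the diffusion property the construction relies on.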