ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 April 2009, Volume 46 Issue 4
Paper
A P2P Framework for Supporting Multi-Dimensional Range Query
Zhang Rong, Qian Weining, and Zhou Aoying
2009, 46(4):  529-540. 
How to efficiently support multi-dimensional range search is one of the research hotspots in traditional data management. The design and implementation of multi-dimensional range query processing in large-scale distributed systems, however, remains a great challenge. VBI-tree is a peer-to-peer indexing framework based on a balanced-tree-structured overlay; it allows any multi-dimensional hierarchical tree structure, such as the R-tree, X-tree, or M-tree, to be implemented in a peer-to-peer computing environment. VBI-tree provides search algorithms that can start from any node, rather than from the root node as in centralized hierarchical tree structures, thus avoiding the performance bottleneck the root node would otherwise introduce. Specifically, in a network with N nodes, it guarantees that queries are answered within O(logN) hops. Network restructuring based on AVL-tree rotation serves as the load-balancing strategy, which balances workload efficiently. Additionally, a succinct structure, VBI*-tree, is obtained by setting up special ancestor-descendant links when facing a large number of data operations, which improves indexing performance. These new links ensure that, to the greatest extent possible, the checking of areas related to a query happens among nodes of the same level instead of sending checking requests directly to high-level nodes, thereby reducing both the load on high-level nodes and the system update cost. Experimental results validate the efficiency and effectiveness of the proposed approach.
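As an illustration, here is a minimal sketch of the AVL-style right rotation that the load-balancing strategy builds on. The Node class and the local-tree setting are hypothetical stand-ins; the actual framework rotates regions of the P2P overlay rather than an in-memory tree.

```python
# Minimal sketch of an AVL-style right rotation, the primitive behind the
# load-balancing step. Node is a hypothetical local stand-in; the real
# VBI-tree restructures overlay regions, not an in-memory tree.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def height(n):
    return 0 if n is None else 1 + max(height(n.left), height(n.right))

def rotate_right(y):
    """Promote y's left child x; y becomes x's right child."""
    x = y.left
    y.left = x.right
    x.right = y
    return x  # new subtree root

def rebalance(n):
    """Rotate when the left subtree is too heavy (simplified AVL rule;
    the symmetric and double-rotation cases are omitted)."""
    if n and height(n.left) - height(n.right) > 1:
        return rotate_right(n)
    return n
```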
Study on Optimal Packet Dispersion Strategy
Zhou Anfu, Liu Min, and Li Zhongcheng
2009, 46(4):  541-548. 
During the migration to next-generation networks, an important trend is that IP-based networks will become the main vehicle for carrying audio and video applications, of which VoIP is representative. However, packet loss is a main factor degrading the QoS of VoIP, and bursty loss jeopardizes QoS far more than scattered loss. Packet dispersion is a mechanism that disperses packets across parallel paths following some dispersion strategy; it transforms bursty packet loss into scattered packet loss so as to improve the QoS of VoIP. Although there is research on the use of packet dispersion in VoIP transmission, the theoretical foundation of packet dispersion, such as to what extent it can improve QoS and what the optimal dispersion strategy is, remains unknown. In this work, concentrating on these problems, the quantitative relation between the QoS of VoIP and the packet dispersion strategy is formulated as a constrained optimization problem under a Bernoulli packet loss model. Based on the closed-form solution of the optimization problem, an algorithm is presented to find the optimal packet dispersion strategy. The proposed optimal dispersion strategy is verified under a Gilbert packet loss model with ns-2 simulation.
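To make the setup concrete, here is a toy brute-force search for a two-path split under an independent (Bernoulli) loss model. The loss rates and the burstiness proxy below are illustrative assumptions only; the paper derives a closed-form optimum rather than searching.

```python
# Toy brute-force dispersion search under a Bernoulli (independent) loss
# model. The rates p1, p2 and the QoS proxy are assumptions for
# illustration; the paper solves the problem in closed form.

def expected_burst_penalty(p, k):
    """Expected number of adjacent double losses on a path carrying k
    packets: k-1 adjacent pairs, each lost together with probability p*p."""
    return max(k - 1, 0) * p * p

def best_split(n, p1, p2):
    """Send k packets on path 1 and n-k on path 2; pick the k that
    minimizes expected losses plus the consecutive-loss proxy."""
    def cost(k):
        loss = k * p1 + (n - k) * p2
        burst = expected_burst_penalty(p1, k) + expected_burst_penalty(p2, n - k)
        return loss + burst
    return min(range(n + 1), key=cost)

print(best_split(20, 0.02, 0.05))  # most packets go to the better path
```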
A Reasoning-Oriented Context Replacement Algorithm in Pervasive Computing
Lin Xin, Li Shanping, Yang Zhaohui, and Xu Jian
2009, 46(4):  549-557. 
Using a context cache is an effective way to reduce the overhead of context access, decrease the amount of information transmitted, and improve application availability when disconnections occur. A reasoning-oriented context replacement algorithm (CORA) is presented, which aims to raise the hit rate of the context cache and reduce the overhead of context transmission. CORA adopts the context state space as a general model for the diverse context reasoning methods used to infer high-level contexts from low-level contexts. CORA consists of two parts: 1) at the cache end, to raise the hit rate, it computes the estimated access probability and invalidation time of low-level contexts to obtain their caching values, which serve as the criterion for cache replacement; 2) at the sensor end, to maintain data consistency, an error bound is set to trigger a proactive update when the sensor reading falls outside the bound. Simulation experiments compare CORA with the classical cache replacement algorithm LRU (least recently used): their hit rates are compared while varying the cache size, the unevenness of context access probabilities, and the update-access ratio. The results demonstrate that CORA achieves a higher hit rate than LRU when the cache size is small, context access probabilities are uneven, and the update-access ratio is high, which means that CORA is better suited to pervasive computing environments.
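A minimal sketch of the caching-value criterion described above: value = estimated access probability times estimated remaining validity time, with the minimum-value entry evicted. The ContextItem record and its field names are hypothetical.

```python
# Sketch of CORA's replacement criterion. ContextItem and its fields are
# hypothetical; the paper estimates the probabilities and invalidation
# times from the context state space.

import time
from dataclasses import dataclass

@dataclass
class ContextItem:
    name: str
    access_prob: float   # estimated probability the context is accessed
    invalid_at: float    # estimated invalidation time (epoch seconds)

def caching_value(item, now):
    """Criterion: access probability x estimated remaining validity time."""
    return item.access_prob * max(item.invalid_at - now, 0.0)

def evict(cache):
    """Replace the entry whose caching value is smallest."""
    now = time.time()
    victim = min(cache, key=lambda it: caching_value(it, now))
    cache.remove(victim)
    return victim
```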
Lower Bounds on Lifetime of Clustering Extended Ultra Wide Band Sensor Network
Xu Juan, Hong Yongfa, and Jiang Changjun
2009, 46(4):  558-565. 
A fundamental challenge for wireless sensor networks lies in the energy constraint; hence the network lifetime becomes a critical concern in their design. Considering time-hopping impulse-radio ultra-wide-band (TH-IR UWB) sensor networks with n sensor nodes randomly located on a disk of area S_n = λn together with a base station, asymptotic lower bounds on the lifetime of complete-clustering and ordinary-clustering extended TH-IR UWB sensor networks are derived respectively. The results indicate that the lower bound on the lifetime of a complete-clustering static extended network is λn/(log(λn))^2 times longer than that of an ordinary-clustering static network, and that the lower bound on the lifetime of a complete-clustering extended network in the ideal case is (λn)^{1/2}/(log(λn))^{3/2} times longer than that of an ordinary-clustering network in the ideal case; thus complete clustering can significantly improve the network lifetime. The results also reveal that, for an ordinary-clustering extended TH-IR UWB sensor network, the lower bound on the lifetime in the ideal case is longer than that of the static network by a factor of (λn/log(λn))^{1/2}; therefore, sensor nodes or a base station moving randomly in the deployment area can improve the lifetime of an ordinary-clustering sensor network. The formulas for the lower bounds also reveal that the network lifetime is inversely proportional to the node number n and the size of the deployment area, and thus a large-scale extended TH-IR UWB sensor network is not practical.
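For readability, the three lifetime ratios are restated below in consistent notation. This is a reconstruction of the abstract's typeset superscripts, with asymptotic constants omitted.

```latex
% Lifetime ratios reconstructed from the abstract (disk area S_n = \lambda n):
\frac{T_{\text{complete, static}}}{T_{\text{ordinary, static}}}
  = \frac{\lambda n}{(\log(\lambda n))^{2}}, \qquad
\frac{T_{\text{complete, ideal}}}{T_{\text{ordinary, ideal}}}
  = \frac{(\lambda n)^{1/2}}{(\log(\lambda n))^{3/2}}, \qquad
\frac{T_{\text{ordinary, ideal}}}{T_{\text{ordinary, static}}}
  = \left(\frac{\lambda n}{\log(\lambda n)}\right)^{1/2}.
```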
An Improved APIT Node Self-Localization Algorithm in WSN Based on Triangle-Center Scan
Zhou Yong, Xia Shixiong, Ding Shifei, Zhang Lei, and Ao Xin
2009, 46(4):  566-574. 
Node self-localization is one of the important research topics in WSNs. APIT is a major range-free localization algorithm. Compared with other range-free algorithms, APIT achieves higher-precision position estimation with small communication cost. However, APIT requires a high anchor node density. Besides, in the APIT test, boundary effects and low neighbor node density can easily increase the InToOut and OutToIn errors. Moreover, the grid scan algorithm is inefficient and has low fault tolerance to OutToIn errors. To address these problems, an improved APIT algorithm based on a triangle-center scan is proposed. Firstly, the causes of the InToOut and OutToIn errors are analyzed and two improvements to APIT are introduced. Then, the effect of the grid scan algorithm on the precision of the position estimate and on efficiency is analyzed, and a triangle-center scan algorithm is presented. Finally, simulation results show that the improved algorithm not only reduces InToOut and OutToIn errors effectively and improves the precision of position estimation, but also has higher fault tolerance to OutToIn errors and better efficiency.
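A minimal sketch of one plausible reading of the triangle-center idea: instead of scanning a grid, average the centers of the anchor triangles that the node is judged to be inside. The point-in-triangle test is the standard same-side test; the exact scan procedure in the paper may differ.

```python
# Sketch of a triangle-center position estimate. The averaging-of-centroids
# step is an assumption about the scan; inputs are hypothetical.

def sign(p, a, b):
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def in_triangle(p, a, b, c):
    """Standard same-side point-in-triangle test."""
    d1, d2, d3 = sign(p, a, b), sign(p, b, c), sign(p, c, a)
    has_neg = (d1 < 0) or (d2 < 0) or (d3 < 0)
    has_pos = (d1 > 0) or (d2 > 0) or (d3 > 0)
    return not (has_neg and has_pos)

def centroid(a, b, c):
    return ((a[0] + b[0] + c[0]) / 3.0, (a[1] + b[1] + c[1]) / 3.0)

def estimate(passed_triangles):
    """Average the centers of all triangles that passed the APIT test."""
    cs = [centroid(*t) for t in passed_triangles]
    n = len(cs)
    return (sum(x for x, _ in cs) / n, sum(y for _, y in cs) / n)
```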
An Energy-Hole Avoidance Routing Algorithm for WSN Based on PSO
Liu Anfeng, Wu Xianyou, and Chen Zhigang
2009, 46(4):  575-582. 
In multi-hop wireless sensor networks characterized by many-to-one traffic patterns, problems related to energy imbalance among sensors often appear. For example, the amount of traffic that sensors are required to forward increases dramatically as the distance to the sink node becomes smaller. Thus, the sensors closest to the sink tend to die early, leaving areas of the network completely unmonitored and causing network partitions. Hence, an important issue in wireless sensor network routing is how to mitigate the energy-hole problem. Based on the characteristics of wireless sensor networks, the routing problem is first converted into a linear programming problem, and the equivalence between the two is proved. On this basis, particle swarm optimization (PSO) is used to solve the energy-hole-avoiding routing problem. The algorithm redefines the particle, the operations on particles, and the "flying" rules of PSO, turning it into a routing optimization algorithm for WSNs. The algorithm is directly applicable to flat networks, and applicable to hierarchical networks with some modification. Its significant advantage is that it provides a general routing optimization approach for energy balance, regardless of the network topology. Finally, the accuracy and effectiveness of the algorithm are demonstrated by theoretical analysis and a number of simulation experiments.
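For context, a minimal sketch of the canonical PSO velocity/position update, in its continuous one-dimensional form. The paper redefines the particle, the particle operations, and the "flying" rules for routes; none of that route-specific encoding is shown here.

```python
# Canonical PSO update step (continuous, one-dimensional particles), the
# optimizer the routing scheme builds on. The paper's route-encoded
# particles and operations are not reproduced here.

import random

def pso_step(xs, vs, pbests, gbest, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update for every particle.
    xs, vs, pbests: lists of floats; gbest: float."""
    for i in range(len(xs)):
        r1, r2 = random.random(), random.random()
        vs[i] = (w * vs[i]
                 + c1 * r1 * (pbests[i] - xs[i])   # pull toward personal best
                 + c2 * r2 * (gbest - xs[i]))      # pull toward global best
        xs[i] += vs[i]
    return xs, vs
```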
Research on Hash-Based RFID Security Authentication Protocol
Ding Zhenhua, Li Jintao, and Feng Bo
2009, 46(4):  583-592. 
Radio frequency identification (RFID) is a technique that uses radio frequency for object identification. It is regarded as one of the ten most important technologies of this century owing to the speed, timeliness, and accuracy with which it collects and processes information through unique identification. RFID can be widely used in manufacturing, retail, logistics, transportation, medical treatment, national defence, etc. However, wireless transmission, signal broadcast, and resource constraints bring potential risks that undermine the reliability of RFID systems and block the deployment of RFID techniques. To counter these security threats, based on an analysis of the security problem, two operation modes, the single session mode and the successive session mode, are proposed, and a Hash-based Security Authentication Protocol (HSAP) between tags and the back-end server is designed for low-cost RFID systems. This protocol can prevent many security problems, including spoofing attacks, replay attacks, and tracking, as well as the problem of desynchronization. A formal proof of the correctness of the proposed authentication protocol is given based on GNY logic. As tags need compute only a hash function and bitwise OR operations, the proposed strategy is very suitable for low-cost RFID systems compared with previous work.
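A minimal sketch of a hash-based challenge-response round in the spirit of such protocols. The message flow, field layout, and choice of SHA-256 are assumptions of this sketch, not HSAP's actual design; see the paper for the real protocol and its GNY-logic proof.

```python
# Sketch of a generic hash-based challenge-response round. The flow and
# hash choice are assumptions; HSAP's actual messages differ.

import hashlib, os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def tag_respond(tag_id: bytes, challenge: bytes) -> bytes:
    """The tag never reveals its ID; it answers with H(ID || challenge),
    so eavesdroppers cannot replay or track a fixed identifier."""
    return H(tag_id + challenge)

def server_authenticate(db_ids, challenge, response):
    """The back-end server finds the matching ID by recomputing the hash."""
    for tid in db_ids:
        if H(tid + challenge) == response:
            return tid
    return None

ids = [os.urandom(8) for _ in range(4)]
c = os.urandom(8)
assert server_authenticate(ids, c, tag_respond(ids[2], c)) == ids[2]
```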
A Two-Dimensional Parity-Based Concurrent Error Detection Method for AES Against Differential Fault Attack and Its VLSI Implementation
Zhao Jia, Han Jun, Zeng Xiaoyang, and Han Lin
2009, 46(4):  593-601. 
A two-dimensional parity-based concurrent error detection (CED) method for the AES algorithm against differential fault attack is proposed. It combines two traditional one-dimensional parity-check methods to check errors in two directions. A simulation has been conducted to compare the fault coverage of this method with that of traditional parity-based CED methods. Compared with previous parity-based CED methods, this scheme has a more optimized configuration in the number and position of the parity-check bits. The approach detects odd-bit errors in both the horizontal and the vertical direction; therefore it has much higher coverage of multi-bit errors while keeping 100% coverage of odd-bit errors. Since every parity calculation module can serve both horizontal and vertical parity computation, a pipelined structure is adopted and all parity calculation modules are reused in both calculations. The hardware cost of this two-dimensional parity-based CED method is at most 18% higher than that of the traditional methods, whereas the critical path and throughput remain the same as in traditional designs. Its high efficiency and low cost make this a novel CED method for the AES algorithm against differential fault attack.
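A minimal sketch of the two-dimensional parity idea over the 4x4-byte AES state: one parity bit per row (horizontal) and per column (vertical), so a fault flipping an odd number of bits in any row or column is caught. The paper's actual checked-bit layout and VLSI pipelining are more refined than this software model.

```python
# Software model of 2D parity over the 4x4-byte AES state. The paper's
# bit layout is more refined; this only illustrates the principle.

def bit_parity(x: int) -> int:
    return bin(x).count("1") & 1

def parity_2d(state):
    """state: 4x4 list of byte values. Returns (row_parities, col_parities).
    Parity of a concatenation equals parity of the XOR of its bytes."""
    rows = [bit_parity(r[0] ^ r[1] ^ r[2] ^ r[3]) for r in state]
    cols = [bit_parity(state[0][c] ^ state[1][c] ^ state[2][c] ^ state[3][c])
            for c in range(4)]
    return rows, cols

def check(state, rows_ref, cols_ref):
    """False means a fault was detected in some row or column."""
    rows, cols = parity_2d(state)
    return rows == rows_ref and cols == cols_ref
```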
An Intrusion Detection Model Based on Mining Multi-Dimension Data Streams
Mao Guojun and Zong Dongjun
2009, 46(4):  602-609. 
Network data are high-speed and unbounded. Typical data mining methods, which scan a database multiple times, are not suitable for constructing intrusion detection models over high-speed network data streams. Proposed in this paper is a new intrusion detection model based on mining multi-dimensional data streams. It combines anomaly detection mechanisms with misuse detection techniques, so it can discover new attack types as anomaly detection does while retaining the high detection efficiency of misuse detection. A network access data stream has a complex structure: an access behavior is expressed by many attributes, so analyzing such a stream is hard. Using a multi-frequency technique, this paper solves the problems of pattern expression and generation for network access data streams. A new data structure called the MaxFP-Tree is proposed, and a new algorithm called MaxFPinNDS is designed to mine frequent patterns from data streams. By using damped-window techniques, MaxFPinNDS efficiently and effectively finds the maximal frequent itemsets in the recent period of a data stream. The experimental results show that the proposed algorithms and model are very effective for network intrusion detection.
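A minimal sketch of the damped-window principle the mining relies on: every count decays by a factor per batch, so recent transactions dominate. Plain dictionaries stand in for the MaxFP-Tree, whose compressed prefix structure is not reproduced here; the decay factor and support threshold are assumptions.

```python
# Damped-window counting sketch. Plain dicts stand in for the MaxFP-Tree;
# d and min_support are illustrative parameters.

from collections import defaultdict

def damped_counts(batches, d=0.9, min_support=2.0):
    """batches: iterable of batches; each batch is a list of itemsets
    (frozensets). Yields the currently frequent itemsets after each batch."""
    counts = defaultdict(float)
    for batch in batches:
        for k in counts:             # age every existing count
            counts[k] *= d
        for itemset in batch:        # count the new arrivals
            counts[itemset] += 1.0
        yield {k: v for k, v in counts.items() if v >= min_support}
```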
Human Animation Synthesis Based on Primitive Movement
Zhu Dengming and Wang Zhaoqi
2009, 46(4):  610-617. 
Synthesizing high-quality human animations from motion capture data is an important technology. The cost of a motion capture system is quite high, and motion data cleaning is exhausting work. Usually, existing motion capture data come as long motion sequences, so in many practical applications it is difficult to create new animations from a long sequence directly. Extracting primitive movements from existing motion capture data for synthesizing new animations is therefore a hot topic. Many existing methods seldom consider the time sequence of the motion data and the correlation among the joints. In this paper, a new technique is proposed to extract primitive movements for synthesizing new animations. Firstly, PCA is adopted to map the high-dimensional motion data into a low-dimensional space, and the squared Mahalanobis distance is used to measure the similarity between poses. Secondly, dynamic time warping and the sum of mean squared errors are combined to segment and label the motion capture data automatically. Finally, a probability transfer model is proposed to construct the motion graph, which can easily be used to synthesize new animations under constraints.
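A minimal sketch of the pose-similarity measure described above: project poses with PCA, then compare frames by squared Mahalanobis distance in the reduced space. The data dimensions and layout below are assumptions; the DTW segmentation and the probability transfer model are not shown.

```python
# Sketch of PCA projection + squared Mahalanobis pose similarity.
# Frame/DOF counts are toy assumptions.

import numpy as np

def pca_project(X, k):
    """X: frames x DOFs. Return k-dim projections and the data mean."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return (X - mu) @ Vt[:k].T, mu

def sq_mahalanobis(a, b, cov_inv):
    d = a - b
    return float(d @ cov_inv @ d)

X = np.random.rand(200, 60)              # toy mocap: 200 frames, 60 DOFs
Y, _ = pca_project(X, 8)
cov_inv = np.linalg.inv(np.cov(Y, rowvar=False))
print(sq_mahalanobis(Y[0], Y[1], cov_inv))
```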
A Fast 2D 8×8 DCT Algorithm Based on Look-Up Table for Image Compression
Ji Xiuhua, Zhang Caiming, and Liu Hui
2009, 46(4):  618-628. 
The discrete cosine transform (DCT) is an essential part of the well-known image/video compression standards. Because of its large number of arithmetic operations, implementing the DCT as fast as possible is a key factor in real-time image transmission. A direct 8×8 2D DCT fast algorithm using a look-up table (LUT) is introduced in this paper. The new algorithm is based on the concept and the symmetry of basis images. With the new algorithm, the number of additions for the transform is reduced, while multiplications are eliminated entirely. By carefully designing the structure of the look-up table, a group of product data is obtained per memory access, so the number of table look-ups is greatly reduced. By exploiting the symmetry of basis images and studying the ranges of the data arising in the transform, the size of the look-up table is decreased. As the new algorithm involves no multiplication, it is attractive for digital image applications on portable devices, where silicon area and power consumption dominate hardware design. Moreover, the new algorithm is easily parallelized. In low-bit-rate image compression, only the low-frequency DCT coefficients, which will be encoded, are computed with the new algorithm, greatly reducing the arithmetic operations.
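A minimal sketch of the multiplier-free idea in one dimension: precompute the product of every possible 8-bit sample value with every basis entry, then form each coefficient with look-ups and additions only. The paper's table layout (grouped products per memory access, basis-image symmetry, range reduction) is far more compact than this naive version.

```python
# Naive LUT-based 8-point DCT: lookups and additions only, no multiplies.
# The paper's table is much smaller thanks to symmetry and range analysis.

import math

N = 8
# 1D orthonormal DCT basis: C[u][x] = c(u) * cos((2x+1) * u * pi / 16)
C = [[(math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N))
      * math.cos((2 * x + 1) * u * math.pi / (2 * N))
      for x in range(N)] for u in range(N)]

# Product of every 8-bit sample value with every basis entry.
LUT = {(u, x): [v * C[u][x] for v in range(256)]
       for u in range(N) for x in range(N)}

def dct_1d_lut(f):
    """8-point DCT of integer samples f[0..7] via look-ups and additions."""
    return [sum(LUT[(u, x)][f[x]] for x in range(N)) for u in range(N)]

print(dct_1d_lut([16, 11, 10, 16, 24, 40, 51, 61]))
```

A 2D 8×8 transform follows by applying the 1D transform to rows and then columns, which is where the basis-image symmetry of the paper pays off.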
A New Location Algorithm of Knuckleprint Based on Wavelet Multi-Resolution Analysis
Mao Xianguang, Lai Xiaozheng, Lai Shengli, and Dai Hongyue
2009, 46(4):  629-636. 
As a new biometric characteristic, the knuckleprint has drawn considerable attention in personal identification. Owing to defects such as weak resistance to noise and susceptibility to hand shape, however, locating the region of interest (ROI) of a knuckleprint is more difficult than for other biometric characteristics of the palm. To locate the knuckleprint ROI accurately, defined here as the minimal region containing all the knuckleprint information, a new automatic detection and location algorithm based on wavelet multi-resolution analysis and Radon projection is put forward. Based on texture similarity, the algorithm decomposes the knuckleprint image by wavelet multi-resolution analysis into several high-frequency and low-frequency sub-images, and then uses feature vectors and region growing to produce a candidate sub-region set in the high-frequency sub-images. The algorithm then applies the Radon projection in the low-frequency sub-images to verify the candidate sub-region set obtained from the high-frequency sub-images. Finally, by straight-line fitting, the ROI of the knuckleprint in the original image is accurately located. Simulation experiments show that the algorithm not only removes the effects of noise and hand shape but also remains robust in different situations.
Optimization of Two Dimensional Test Data Compression
Zhou Bin, Wu Xinchun, and Ye Yizheng
2009, 46(4):  637-643. 
In order to reduce the storage volume of test data during built-in self-test (BIST), based on a two-dimensional test data compression BIST scheme that combines input reduction (horizontal compression) and TRC test-set embedding (vertical compression), an improved input reduction algorithm and a TRC seed selection algorithm based on compatibility judging are used together to optimize the horizontal and vertical compression. The optimization considers the influence of the percentage of specified bits (PSB) on the vertical compression under the same percentage of compatibility (PC), and the influence of PC on the vertical compression under the same PSB. Experimental results for the ISCAS89 benchmark circuits show that for each PSB there is always a range of PC, [PC_low_limit, PC_high_limit], that minimizes the test storage, and that the relationships between PSB and PC_low_limit and PC_high_limit are linear. The proposed two-dimensional compression scheme requires 20%~75% less test storage than previous test data compression schemes, and it is indeed possible to embed the entire precomputed test set in the TRC sequence. Furthermore, the test control logic is simple, uniform for all circuits, and can be shared among multiple CUTs. Finally, the proposed approach requires no mapping logic gates between the test generator circuit and the CUT; hence it imposes no additional performance penalty.
An Ant Colony Optimization Algorithm Based on Mutation and Pheromone Diffusion for the Multidimensional Knapsack Problems
Ji Junzhong, Huang Zhen, and Liu Chunnian
2009, 46(4):  644-654. 
The ant colony optimization (ACO) algorithm is a metaheuristic, stochastic search technique that has become one of the effective tools for solving discrete optimization problems. The multidimensional knapsack problem (MKP) is a classical combinatorial optimization problem whose goal is to find a subset of objects that maximizes the total profit while satisfying several resource constraints. Many algorithms solve MKPs effectively, among which ACO is a new and robust method. However, the ACO algorithm suffers two bottlenecks on large-scale MKPs: it takes too much time, and its solutions are imprecise. In this paper, a novel high-performance ACO algorithm is proposed which combines improvements to the pheromone updating and the stochastic search scheme. Firstly, in light of the preference for better solutions, the association distances among objects are mined in each cycle with a Top-k strategy. Then a pheromone diffusion model based on the information fountain of an object is established, which strengthens the collaboration among ants. Finally, a simple mutation scheme is employed to optimize the evolution result of each step. Experimental results on the MKP benchmark testing set show that, compared with the latest improved algorithms, the proposed algorithm not only obtains many more optimal solutions but also greatly improves convergence speed.
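A minimal sketch of the basic ACO construction step for the MKP that the improvements build on: pick the next object with probability proportional to pheromone^alpha times heuristic^beta among objects that still fit every resource constraint. The pheromone diffusion, Top-k mining, and mutation of the paper are not shown; parameter values are assumptions.

```python
# Basic ACO construction step for the MKP. The paper's pheromone
# diffusion and mutation layers sit on top of this; alpha/beta are
# illustrative defaults.

import random

def feasible(obj, used, limits, weights):
    """Does object obj still fit within every resource limit?"""
    return all(used[r] + weights[obj][r] <= limits[r] for r in range(len(limits)))

def choose(candidates, tau, eta, alpha=1.0, beta=2.0):
    """Roulette-wheel selection: P(i) ~ tau[i]^alpha * eta[i]^beta."""
    scores = [(tau[i] ** alpha) * (eta[i] ** beta) for i in candidates]
    total = sum(scores)
    r, acc = random.random() * total, 0.0
    for i, s in zip(candidates, scores):
        acc += s
        if acc >= r:
            return i
    return candidates[-1]
```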
Research on an ε-Domination Based Orthogonal Differential Evolution Algorithm for Multi-Objective Optimization
Gong Wenyin and Cai Zhihua
2009, 46(4):  655-666. 
Evolutionary multi-objective optimization (EMO) has become a very popular topic in the last few years. However, designing an efficient and effective EMO algorithm that finds a near-optimal and near-complete Pareto front is a challenging task. In this paper, a novel differential evolution algorithm is proposed to solve multi-objective optimization problems (MOPs) efficiently. The proposed approach uses an archive population to retain the obtained non-dominated solutions; it also adopts the orthogonal design method with a quantization technique to generate an initial population of points scattered uniformly over the feasible solution space, so that the algorithm can evenly scan the feasible solution space to locate good points for further exploration in subsequent iterations. Moreover, it is based on the ε-dominance concept to obtain a good distribution of Pareto-optimal solutions in a small computational time. To make the algorithm converge faster, the new approach employs a hybrid selection mechanism in which a random selection and an elitist selection are interleaved. Experiments on eight benchmark problems of diverse complexity show that the new approach obtains a good distribution in all cases. Compared with several other state-of-the-art evolutionary algorithms, it achieves not only comparable results in terms of convergence and diversity metrics, but also a considerable reduction of the computational effort. Furthermore, the influence of different CR values and of the parameter of the hybrid selection mechanism on the performance of the algorithm is discussed experimentally.
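A minimal sketch of the ε-dominance test on which archive thinning of this kind is based: objective space is cut into boxes of side eps, and a solution ε-dominates another when its box is no worse in every objective. Minimization and a uniform box size are assumptions of this sketch.

```python
# Epsilon-dominance test (minimization assumed). Cutting objective space
# into eps-boxes bounds the archive size and spreads the kept solutions.

import math

def box(f, eps):
    """Box index of objective vector f for box side eps."""
    return tuple(math.floor(v / eps) for v in f)

def eps_dominates(f1, f2, eps):
    """f1 eps-dominates f2 if f1's box is no worse in every objective
    and strictly better in at least one (i.e. the boxes differ)."""
    b1, b2 = box(f1, eps), box(f2, eps)
    return all(a <= b for a, b in zip(b1, b2)) and b1 != b2
```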
JLU-RLAO and JLU-QLAO: Two Non-Deterministic Planners
Sun Jigui, Yin Minghao, and Lü Shuai
2009, 46(4):  667-675. 
Classical decision-theoretic planning methods assume that the probabilistic model of the domain is always accurate. Unfortunately, for lack of information, planning modeling experts can sometimes obtain only incomplete quantitative information, or even ordinal, qualitative information, for modeling the uncertainty of world transitions. Recently, LAO* has proved to be one of the most efficient planners for solving probabilistic planning problems. Two algorithms, rLAO* and qLAO*, are introduced to solve non-deterministic planning problems without complete information, based on LAO*. Specifically, rLAO* solves planning problems under uncertainty with incomplete quantitative information, and qLAO* solves planning problems under uncertainty with qualitative information. Both algorithms are proved sound and complete. They have been implemented in the two non-deterministic planners JLU-RLAO and JLU-QLAO and compared with LAO* on many benchmark problems. Experimental results show that both systems inherit the excellent performance of LAO* for planning under uncertainty. Because the JLU-RLAO and JLU-QLAO planners can solve planning problems under uncertainty with incomplete information, they can be regarded as complementary to LAO*.
A Novel Unified Manifold Learning Framework and an Improved Laplacian Eigenmap
Hou Chenping, Wu Yi, and Yi Dongyun
2009, 46(4):  676-682. 
Manifold learning is crucial in many research fields, such as pattern recognition, data mining, and computer vision. However, little work has focused on developing a common framework that can unify all approaches. Meanwhile, since the Laplacian eigenmap (LE) is a local manifold learning approach, it is very sensitive to the neighborhood size. Considering all kinds of manifold learning approaches, a novel unified manifold learning framework is proposed in this paper. It consists of two functional items: the maintaining item and the expecting item. Most approaches can be analyzed and improved within this framework. For illustration, LE is analyzed within the proposed framework, and an improved Laplacian eigenmap (ILE) is then presented. ILE is mainly based on LE and maximum variance unfolding (MVU): the local character of the graph Laplacian, which corresponds to the maintaining item, is kept, while the variances between points, which correspond to the expecting item, are maximized. ILE inherits the advantages of LE and MVU; compared with LE, it is less sensitive to the neighborhood size, and the overly strict local constraint of MVU is relaxed. Moreover, ILE maintains the clustering property and discovers the intrinsic character of the original data. Several experiments on both toy examples and real data sets are given for illustration.
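A minimal sketch of the LE "maintaining item": build a kNN graph, form the graph Laplacian L = D - W, and embed with the bottom nonzero eigenvectors. The 0/1 weights, neighborhood size, and toy data are assumptions; ILE's MVU-style variance-maximizing (expecting) item is not shown.

```python
# Plain Laplacian eigenmap sketch: the "maintaining item" only.
# ILE adds a variance-maximizing term on top of this.

import numpy as np

def laplacian_eigenmap(X, k=6, dim=2):
    n = len(X)
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(D2[i])[1:k + 1]:              # k nearest neighbors
            W[i, j] = W[j, i] = 1.0                       # symmetric 0/1 weights
    L = np.diag(W.sum(1)) - W                             # graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]                             # skip constant eigvec

Y = laplacian_eigenmap(np.random.rand(100, 5))
print(Y.shape)  # (100, 2)
```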
Strategy of Revising Rules for Association Text Classification
Qiu Jiangtao, Tang Changjie, Zeng Tao, and Liu Yintian
2009, 46(4):  683-688. 
Text classification is an important field in data mining and machine learning. In recent years, the use of association rules for text categorization has attracted great interest, and a variety of useful methods have been developed. These works focus on how to generate classification rules and then pick rules to build a high-accuracy classifier. An analysis of association-rule-based text classification yields the observation that reducing the misclassification of negative samples improves classification accuracy while leaving the categorization of positive samples unchanged. Inspired by the negative selection algorithm, the authors propose a rule-revising strategy to exploit this observation. First, a new kind of rule, called a negative rule, is generated by mining frequent itemsets over the negative samples misclassified by a classification rule. The classification rule is then combined with its negative rules into an enhanced association rule. Enhanced association rules dramatically decrease the misclassification of negative samples, and classification accuracy is therefore improved. Experiments are conducted on a real Web page dataset. Compared with the text classification algorithms CMAR, S-EM, and NB, the rule-revising strategy further improves classification accuracy. The utility and feasibility of the strategy are also demonstrated by a formal proof.
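A minimal sketch of the revising idea: for a rule that misfires on negative samples, mine frequent itemsets from exactly those samples and attach them as blocking conditions. The toy 2-itemset miner, data structures, and support threshold are assumptions of this sketch.

```python
# Sketch of negative-rule revision. The toy pair miner and thresholds
# are illustrative; the paper mines general frequent itemsets.

from collections import Counter
from itertools import combinations

def mine_frequent_pairs(docs, min_sup):
    """docs: list of word sets. Return frequent 2-itemsets (toy miner)."""
    c = Counter(p for d in docs for p in combinations(sorted(d), 2))
    return {frozenset(p) for p, n in c.items() if n >= min_sup}

def enhance(rule_items, misfired_negatives, min_sup=2):
    """Enhanced rule: fire only when rule_items match and no negative
    itemset (mined from the misclassified negatives) matches."""
    negs = mine_frequent_pairs(misfired_negatives, min_sup)
    def classify(doc):
        return rule_items <= doc and not any(n <= doc for n in negs)
    return classify
```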
Error Correction for Handwritten Mathematical Expression Recognition by Pen and Speech
Jiang Yingying, Ao Xiang, Tian Feng, Wang Xugang, and Dai Guozhong
2009, 46(4):  689-697. 
As recognition-based interfaces are error prone, it is important to provide a natural and efficient error correction method for them. Handwritten mathematical expressions have 2D structure, which makes both recognizing them and correcting their recognition errors challenging. In this paper, a multimodal error correction technique for handwritten mathematical expression recognition is introduced. It allows users to correct errors by pen and speech: symbol segmentation errors are corrected by pen, while symbol recognition errors and structure analysis errors can be corrected by pen alone or by pen and speech together. A user first selects an error with the pen and then speaks the corresponding mathematical term or symbol. The core of the proposed technique is a multimodal fusion algorithm that fuses the handwriting and speech recognition results. Its input is the speech and the symbols selected by pen; according to whether the speech input is a mathematical term or the name of a mathematical symbol, the algorithm chooses a specific fusion method to adjust the handwritten expression and obtain the most likely result. Evaluation shows that the proposed multimodal error correction technique is effective and helps users correct errors in mathematical expression recognition more efficiently than the unimodal pen-based technique.
An Address Register Promotion Method Based on Feedbacks
Zhang Chao, Lü Fang, Wang Lei, and Feng Xiaobing
2009, 46(4):  698-704. 
In processor architectures such as MIPS, ALPHA, SPARC, and PowerPC, an indirect addressing mode is used to access global and static variables. Since the addresses of these variables and the corresponding values lie in different data sections of the binary, the data locality of the program is poor. As a result, loading the read-only addresses of these variables on every access tends to cause redundant data cache misses. Moreover, the indirect addressing mode generates two sequential load instructions with a data dependence between them, which decreases the instruction-level parallelism (ILP) of the program. The authors present an address register promotion method based on feedbacks (ARPF) to solve these problems. ARPF reduces the redundant accesses to the read-only addresses of global and static variables, increases the instruction-level parallelism of the program, and avoids the performance loss caused by the increase in register pressure that register promotion brings. The algorithm has been implemented in the Loongson compiler for the MIPS architecture. Experiments on the SPEC CPU2000INT benchmarks show that ARPF improves the performance of all benchmarks by 1%~6%.