ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 May 2008, Volume 45 Issue 5
Paper
Dynamic Intrusion Response Based on Game Theory
Shi Jin, Lu Yin, and Xie Li
2008, 45(5):  747-757. 
With recent advances in network-based technology and the increasing dependence of everyday life on it, assuring the reliable operation of network-based systems is very important. In recent years the number of attacks on networks has increased dramatically, and interest in network intrusion detection and response has consequently grown among researchers. Yet while other network security technologies are widely applied and achieving good results, intrusion detection and response technology lags behind. One reason is that current intrusion detection technology is limited by the detection algorithms themselves; another is that current alert-response research does not sufficiently consider the system's incentives or the attacker's changing strategies. A dynamic intrusion response model based on game theory (DIRBGT) is proposed to solve the second problem. On the one hand, DIRBGT accounts for the incentives of both the system and the attacker across the board, so the system's incentive can be assured. On the other hand, it handles the attacker's intent and changes of strategy well, so its optimal answer is stable and reliable, whereas optimal responses inferred from the system alone are unstable. Experimental results show that the DIRBGT model effectively improves the accuracy and effectiveness of alert responses.
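The core game-theoretic idea in the abstract above can be sketched as follows. This is a toy illustration, not the DIRBGT model itself: both players' payoffs are considered, and the defender picks the response with the best worst-case payoff over the attacker's possible strategies. All action names and payoff values here are hypothetical.

```python
# Hypothetical payoff matrix: keys are (defender action, attacker action).
DEFENDER_PAYOFF = {
    ("block",   "probe"): 2, ("block",   "exploit"): 3,
    ("monitor", "probe"): 4, ("monitor", "exploit"): -1,
}

def maximin_response(defender_actions, attacker_actions, payoff):
    """Defender action maximizing the worst-case payoff, so the chosen
    response stays reasonable even if the attacker switches strategy."""
    def worst_case(d):
        return min(payoff[(d, a)] for a in attacker_actions)
    return max(defender_actions, key=worst_case)

best = maximin_response(["block", "monitor"], ["probe", "exploit"], DEFENDER_PAYOFF)
```

Here "monitor" has the higher best-case payoff, but "block" wins under a worst-case (strategy-changing) attacker, which is the stability property the abstract emphasizes.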
STBAC: A New Access Control Model for Operating System
Shan Zhiyong and Shi Wenchang
2008, 45(5):  758-764. 
With the rapid development and increasing use of networks, threats to modern operating systems mostly come from the network: buffer overflows, viruses, worms, Trojans, DoS attacks, etc. On the other hand, as computers, especially PCs, become cheaper and easier to use, people prefer to use computers exclusively and share information through the network. Traditional access control mechanisms, however, cannot deal with these threats in a smart way. Traditional DAC in an OS alone cannot defeat network attacks well. Traditional MAC is effective in maintaining security, but suffers from application incompatibility and administration complexity. To this end, a new access control model for operating systems, STBAC, is proposed, which can defeat network attacks while maintaining good compatibility, simplicity, and performance. Even when some processes are subverted, STBAC can still protect vital resources so that the intruder cannot reach his or her final goal. STBAC regards processes that have performed nontrustable communication as starting points of suspicious taint, traces the activities of suspiciously tainted processes and their child processes by taint rules, and forbids suspiciously tainted processes from illegally accessing vital resources by protection rules. Tests on the STBAC prototype show that it can protect system security effectively without imposing heavy compatibility or performance burdens on the operating system.
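The taint and protection rules described above can be sketched in a few lines. This is a minimal illustration under simplified assumptions (process and resource names are invented, and the real model's rule set is richer): a process becomes suspiciously tainted after nontrustable network communication, children inherit the taint, and tainted processes are denied access to vital resources.

```python
VITAL_RESOURCES = {"/etc/passwd", "/boot/kernel"}   # illustrative examples

class Process:
    def __init__(self, name, parent=None):
        self.name = name
        # taint rule: a child inherits its parent's suspicious taint
        self.tainted = parent.tainted if parent else False

    def recv_untrusted(self):
        # taint rule: nontrustable communication marks the process
        self.tainted = True

    def spawn(self, name):
        return Process(name, parent=self)

def access_allowed(proc, resource):
    """Protection rule: tainted processes may not touch vital resources."""
    return not (proc.tainted and resource in VITAL_RESOURCES)
```

Untainted processes run unrestricted, which is why compatibility stays good; only the suspicious lineage is confined.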
A Trust Valuation Model in MANET
Ye Ayong and Ma Jianfeng
2008, 45(5):  765-771. 
Being infrastructureless, mobile ad hoc networks depend heavily on node trust for security and reliability. A new trust valuation model based on node experience is given to evaluate trustworthiness between network nodes. To improve the accuracy and rationality of node trust evaluation in a mobile network, evidence theory is introduced to evaluate trust and to combine multilateral experience from other nodes. In addition, an observation frame is introduced to incorporate the time sensitivity of experience, which provides adequate support for coping efficiently with the strategically changing behaviors of malicious nodes. A low-cost recommendation technique based on a sleep mechanism is provided for neighboring nodes to share experience information, which obtains a faster convergence rate and reduces overall energy dissipation. In addition, recommendation trust is quantitatively evaluated by a fuzzy similarity measure, which significantly increases resilience against dishonest feedback. In comparison with existing work, a complete trust valuation model is designed, with emphasis on its adaptability to the dynamics of trust, robustness, and resource saving. This model can be used in coordination and security decisions for network services. Finally, theoretical analysis and simulations are given to evaluate the proposed techniques.
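The evidence-combination step mentioned above can be illustrated with Dempster's rule, the standard combination operator of evidence theory. This sketch assumes a simplified frame with focal sets {trust}, {distrust}, and {trust, distrust} (unknown); the mass values are illustrative, not from the paper.

```python
T, D, U = frozenset({"t"}), frozenset({"d"}), frozenset({"t", "d"})

def combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments."""
    out = {T: 0.0, D: 0.0, U: 0.0}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                out[inter] += ma * mb
            else:
                conflict += ma * mb        # contradictory evidence
    norm = 1.0 - conflict                  # renormalize over non-conflict
    return {s: v / norm for s, v in out.items()}

direct = {T: 0.6, D: 0.1, U: 0.3}          # a node's own experience
recommended = {T: 0.5, D: 0.2, U: 0.3}     # a neighbor's shared experience
fused = combine(direct, recommended)
```

Two mildly positive, mostly agreeing bodies of evidence fuse into a stronger belief in trust, which is how multilateral experience sharpens the trust estimate.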
A Robust Watermarking Scheme Based on Image Feature and Pseudo-Zernike Moments
Wang Xiangyang, Hou Limin, and Yang Hongying
2008, 45(5):  772-778. 
Digital watermarking, as an efficient supplement to traditional cryptographic systems, has become an important technique for the intellectual property protection of digital multimedia. Nowadays the image watermarking field is developing at an unprecedented pace; on the other hand, attacks against watermarking systems have become more sophisticated. In general, these attacks fall into two categories: common signal processing and geometric distortion. Geometric distortion is known as one of the most difficult attacks to resist, since it desynchronizes the location of the watermark and hence causes incorrect watermark detection. Based on Harris-Laplace theory and pseudo-Zernike moments, a new feature-based image watermarking scheme robust to geometric attacks is proposed in this paper. Firstly, the Harris-Laplace detector is used to extract stable feature points from the host image; then the local feature regions (LFR) are determined adaptively according to feature scale theory and scaled to a standard size; finally, the digital watermark is embedded into the LFR by quantizing the magnitudes of the pseudo-Zernike moments. Experimental results show that the proposed scheme is not only invisible and robust against common signal processing, such as median filtering, sharpening, noise addition, and JPEG compression, but also robust against geometric attacks such as rotation, translation, scaling, row or column removal, shearing, local geometric distortion, and combinations thereof.
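The final embedding step, quantizing a moment magnitude to carry one bit, can be sketched with a dither-lattice quantizer in the style of quantization index modulation. The moment computation itself is omitted, and the step size `delta` is a hypothetical parameter; this is an illustration of quantization-based embedding in general, not the paper's exact quantizer.

```python
def qim_embed(magnitude, bit, delta=2.0):
    """Snap a (hypothetical pseudo-Zernike moment) magnitude onto the
    dither lattice corresponding to the watermark bit."""
    q = delta * round(magnitude / delta)
    return q + (delta / 4 if bit else -delta / 4)

def qim_extract(magnitude, delta=2.0):
    """Decode the bit from whichever dither lattice lies nearer."""
    d1 = abs(magnitude - (delta * round((magnitude - delta / 4) / delta) + delta / 4))
    d0 = abs(magnitude - (delta * round((magnitude + delta / 4) / delta) - delta / 4))
    return 1 if d1 < d0 else 0
```

Because the two lattices sit delta/2 apart, the bit survives any perturbation of the magnitude smaller than delta/4, which is where the robustness against mild signal processing comes from.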
Double Secret Keys and Double Random Numbers Authentication Scheme
Tian Junfeng, Jiao Hongqiang, Li Ning, and Liu Tao
2008, 45(5):  779-785. 
The computer network is an open system, which leads to considerable security vulnerabilities and threats: network resources can easily be visited and illegally copied, so identity authentication of Web resource visitors has become very important. In 1981, Lamport proposed an authentication scheme based on a password table. This scheme can resist replay attacks, but it is no longer secure once the table stored in the host is compromised. Smart cards make identity authentication more practical, and many smart-card-based authentication schemes have been proposed to improve authentication efficiency and security. Firstly, Manik Lal Das's authentication scheme is analyzed in detail in this paper; it has a time synchronization problem and is vulnerable to forgery attacks. Therefore, a mutual authentication scheme based on bilinear pairings and using smart cards is proposed, together with a novel technique of using double secret keys and double random numbers to prevent forgery during authentication. It enhances the security of the authentication system and accomplishes mutual authentication safely between the user and the remote system. Finally, a proof of correctness and an analysis of the security and computational complexity of the scheme are given.
BackboneBased Relative Positioning in Ad Hoc Networks
Tian Mingjun, Zhao Dan, Wang Jingxuan, and Yan Wei
2008, 45(5):  786-793. 
Relative positioning is a hot topic in ad hoc networks, and self-positioning algorithms are essential in this field. Since SPA was proposed, much work has been done to obtain better accuracy while reducing communication cost, most of it retaining the main architecture of SPA. One approach, the cluster-based approach, successfully decreases the communication cost of SPA, but it has serious accuracy problems: when two neighboring local coordinate systems are merged, the result is ambiguous, which causes low positioning accuracy, especially when the number of nodes is large. Proposed in this paper is a new backbone-based relative positioning algorithm, BBA. The algorithm consists of three main steps: first, a subset of nodes is selected to establish a backbone network; then every node on the backbone builds a local coordinate system; finally, all the local coordinate systems are merged into a global one. Simulation results show that, compared with the cluster-based approach, the BBA algorithm not only decreases the communication cost but is also highly accurate. Additionally, BBA reduces the proportion of nodes involved in computing positions, which is also an important improvement.
An Improved Transport Layer Identification of Peer-to-Peer Traffic
Xu Peng, Liu Qiong, and Lin Sen
2008, 45(5):  794-802. 
Peer-to-peer (P2P) traffic identification has been a hot topic in network measurement in recent years. Identification based on the transport-layer behavior of P2P traffic scales well because it is independent of the signature strings of P2P applications. However, an application's transport-layer behavior is easily affected by the network environment, so the accuracy of this identification method differs greatly between domestic and overseas networks. To improve the existing transport-layer identification method for the domestic network environment, three proposals are offered in this paper: a filtering mechanism based on known non-P2P ports, a counting mechanism using data flows, and an FTP flow filtering mechanism using reversed flows. These proposals are then validated on domestic traces. The experimental results indicate that the flow accuracy and byte accuracy of the improved P2P transport-layer identification method approach 95% and 99%, respectively. Finally, the improved method is applied for the first time to a trace of the Internet backbone of the China Education and Research Network; the measurements show that the volume of P2P traffic has grown from roughly 0.76% to 70% of the total backbone traffic.
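The first proposal, filtering on known non-P2P ports, can be sketched directly. The port set and flow-record fields below are illustrative assumptions, not the paper's actual lists: flows whose endpoints use well-known non-P2P service ports are dropped before the transport-layer heuristic runs.

```python
# Hypothetical pre-filter set: SMTP, DNS, HTTP, POP3, IMAP, HTTPS.
NON_P2P_PORTS = {25, 53, 80, 110, 143, 443}

def prefilter(flows):
    """Keep only flows whose endpoints avoid known non-P2P service ports."""
    return [f for f in flows
            if f["sport"] not in NON_P2P_PORTS
            and f["dport"] not in NON_P2P_PORTS]

flows = [
    {"sport": 51413, "dport": 6881},   # high ports: plausible P2P candidate
    {"sport": 34567, "dport": 80},     # plain Web traffic: filtered out
]
candidates = prefilter(flows)
```

Removing obviously non-P2P flows up front reduces the false positives that the behavioral heuristic would otherwise produce on ordinary client-server traffic.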
TBSN: A Taxonomy Hierarchy Based P2P Network
Qiao Baiyou, Wang Guoren, and Ding Linlin
2008, 45(5):  803-809. 
Constructing semantic overlay networks is an important way to support semantics-based search and to enhance the search performance and scalability of P2P networks. Existing P2P semantic overlay networks based on taxonomy hierarchies cannot fully utilize the semantic information those hierarchies contain. Therefore, a taxonomy hierarchy based P2P network (TBSN) is presented in this paper, which fully considers the characteristics of data sources that employ a taxonomy hierarchy to describe the contents of their objects. TBSN dynamically clusters peers into different semantic clusters based on the semantic information contained in the taxonomy hierarchy, and organizes the semantic clusters into semantic routing overlays, thus forming a semantics-based P2P network. Each semantic cluster consists of one super-peer node and several peer nodes, and is responsible only for answering queries in its semantic subspace. A query is first routed to the appropriate semantic clusters by an efficient search algorithm and then forwarded to the specific peers that hold the relevant data objects; the number of peers involved and messages sent is thus reduced, and network performance is greatly enhanced. Preliminary evaluation shows that TBSN achieves a competitive tradeoff between search performance and overheads, and that load balance among clusters and data semantics within each cluster are both well maintained.
A DelayConstrained Steiner Tree Algorithm Using MPH
Zhou Ling and Sun Yamin
2008, 45(5):  810-816. 
Multicast routing has received considerable attention from researchers in computer communication. In most settings it is NP-complete and is formulated as a Steiner tree problem. In order to optimize cost and decrease time complexity under a delay upper bound, the delay-constrained Steiner tree problem is addressed. The time complexity of the minimum path heuristic (MPH) algorithm is analyzed first, and then a delay-constrained least-cost (DCLC) multicast routing algorithm called DCMPH is presented to construct the DCLC multicast tree. With DCMPH, a joining member node selects the least-cost path to the existing multicast tree; if that path's delay violates the delay upper bound, the least-delay path computed by a shortest path tree (SPT) algorithm is used instead to join the current multicast tree. In this way a low-cost multicast routing tree can be constructed without violating the delay upper bound. The correctness of DCMPH is proved by mathematical induction and its time complexity is analyzed theoretically. Simulation results show that DCMPH performs well in constructing DCLC multicast routing trees and has lower time complexity than many other DCLC multicast routing algorithms.
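The join step described above can be sketched as follows. This is a simplified illustration of the cost-first, delay-fallback idea, not the full DCMPH algorithm: for brevity the delay check covers only the joining path rather than the complete source-to-member delay along the tree, and the graph, costs, and delays are invented.

```python
import heapq

def dijkstra(graph, src, metric):
    """Single-source shortest paths; metric 0 = cost, metric 1 = delay.
    Edges are stored as graph[u][v] = (cost, delay)."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                            # stale queue entry
        for v, w in graph[u].items():
            if d + w[metric] < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w[metric], u
                heapq.heappush(pq, (dist[v], v))
    return dist, prev

def join_member(graph, tree, member, delay_bound):
    """Join via the least-cost path to the current tree; fall back to the
    least-delay path when the delay bound is violated."""
    for metric in (0, 1):                       # least-cost first, then least-delay
        dist, prev = dijkstra(graph, member, metric)
        attach = min(tree, key=lambda n: dist.get(n, float("inf")))
        path, node = [attach], attach
        while node != member:
            node = prev[node]
            path.append(node)
        path.reverse()                          # member -> ... -> attach point
        delay = sum(graph[path[i]][path[i + 1]][1] for i in range(len(path) - 1))
        if delay <= delay_bound:
            tree.update(path)
            return path
    raise ValueError("no path satisfies the delay bound")
```

With a tight bound the cheap-but-slow path is rejected and the fast path is taken; with a loose bound the least-cost path wins, which is exactly the tradeoff DCMPH exploits.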
A Survey on Operating System Power Management
Zhao Xia, Chen Xiangqun, Guo Yao, and Yang Fuqing
2008, 45(5):  817-824. 
With the rapid development of microelectronics and mobile computing technology, reducing power dissipation of computing systems has become a hot topic for both academia and industry. As the resource manager of systems, operating system power management makes decision based on the characteristics of workload and hardware components, in order to reduce energy consumption while satisfying the performance constraints of applications simultaneously. Because of the diversity of applications and complexity of multitask operating system concurrency, OS dynamic power management strategy is facing workload uncertainty. One of the key issues becomes the tradeoff between performance and energy consumption. In this paper, the OS dynamic power management techniques are surveyed from the directions of optimal control strategy and operating system design. From the perspective of optimal control strategy, an OS power management subsystem model and two types of dynamic power management strategies, namely dynamic power management and dynamic voltage scaling are summarized in detail. From the perspective of operating system design, how to abstract power resources and how to design power management mechanisms and strategies in existing OS resources management framework are also important issues. Some key ideas and recent progresses on these issues are discussed. Finally, several challenges and open issues in OS power management are summarized.
A Formal Certifying Framework for Assembly Programs
Li Zhaopeng, Chen Yiyun, Ge Lin, and Hua Baojian
2008, 45(5):  825-833. 
Proofcarrying code brings two grand challenges to the research field of programming languages. One is to study the technology of certifying compilation. The other is to seek more expressive program logics or type systems to specify or reason about the properties of highlevel or lowlevel programs. And safety is an important issue among the properties of highassurance software. The verification method for software to meet its safety policies is one of the hot researches. In terms of the framework to design and verification of safety programs, and the pointer logic proof system, this paper introduces the research on the formal description of target machine, the formal certifying framework for assembly programs and property proof of assembly pointer programs. The main characteristics of the design and implementation are as follows: first, the design of the certifying framework is based on program verification method of Hoare style; second, program property related with pointers is proved using a pointer logic which is similar to the counterpart in the level of source language; and finally, a simple type system is designed to fulfill type checking on pointers. Moreover, this work has been formalized in the proof assistant Coq and all code is available on the website of the authors’ laboratory.
Software Pipelining with Cache Profiling Information
Zhou Qian, Feng Xiaobing, and Zhang Zhaoqing
2008, 45(5):  834-840. 
Software pipelining is an important instruction scheduling technique that tries to improve the performance of a loop by overlapping the execution of several successive iterations. As the gap between processor and memory speed grows, memory access instructions, especially those that cause cache misses, become the bottleneck restricting high performance. Since the latency of these instructions is not fixed, predicting and hiding it is very important. Unlike previous methods, cache profiling is introduced here to collect runtime information, predict memory access latency, and schedule accordingly. Increasing the assumed memory access latency in a software-pipelined loop may also increase the initiation interval, in which case performance does not improve; the CSMS and FLMS algorithms try to change the memory access latency without increasing the initiation interval. These algorithms are improved by adjusting the memory access latency according to cache profiling information, which is more accurate than the previous approach. Experimental results show that the new method improves performance effectively, raising SPEC2000 performance by 1% on average and by as much as 11% in some cases.
A New Scheduling Algorithm in Grid Based on Dynamic Decisive Path
Lin Jianning and Wu Huizhong
2008, 45(5):  841-847. 
The grid is an open, dynamic, and changeable application environment, and task scheduling in it has been a hot research topic in recent years. Task scheduling in a grid environment is an NP-hard problem, and choosing effective resources to run the tasks is important. Iterative methods such as genetic algorithms can solve it effectively, but they spend too much time when scheduling many tasks, while customary heuristic algorithms often leave spare time slots on the resources. This paper therefore presents a new heuristic algorithm based on the idea of scheduling the tasks first and then optimizing the schedule. First a common heuristic algorithm is used to schedule the tasks; then a new DAG is rebuilt, and the decisive tasks and the decisive path are constructed. After that, the decisive tasks are rescheduled onto new resources with suitable spare time slots in order to advance each decisive task and its child tasks. Also adopted in this paper is a new method to detect deadlock between tasks in the DAG, so that the tasks can complete normally. Simulation tests show that the heuristic algorithm tackles the problem in a simple and efficient way.
A Novel Algorithm of Simultaneous Localization and Map Building (SLAM) with Particle Filter
Guo Jianhui and Zhao Chunxia
2008, 45(5):  853-860. 
The computational complexity of the most popular particle filtering SLAM algorithms is linearly proportional to the number of landmarks, which gives them an obvious computational advantage for dense-map or large-scale SLAM. However, there is no guarantee that the computed covariance matches the actual estimation errors, which is the SLAM consistency problem; this lack of consistency can make the filter diverge. To ensure consistency, a new particle filtering SLAM algorithm is proposed, based on the marginal particle filter and using the unscented Kalman filter (UKF) to generate proposal distributions. The underlying algorithm operates directly on the marginal distribution, avoiding importance sampling on a space of growing dimension, while the UKF reduces linearization error and yields accurate proposal distributions. Compared with common particle filtering SLAM methods, the new algorithm increases the number of effective particles and effectively reduces the variance of the particle weights; it is also consistent thanks to better particle diversity. As a result, it does not suffer from some of the shortcomings of existing particle methods for SLAM and has a distinct advantage. Finally, extensive simulations are carried out to evaluate the algorithm's performance, and the results indicate that the algorithm is valid.
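The "number of effective particles" criterion mentioned above is a standard degeneracy measure in particle filtering and can be sketched directly, together with the systematic resampling that is typically triggered when it drops too low. The SLAM-specific marginal filter and UKF proposal are omitted; the weights below are illustrative.

```python
def effective_particles(weights):
    """N_eff = 1 / sum(w_i^2) for normalized weights; equals N for uniform
    weights and 1 when a single particle carries all the mass."""
    return 1.0 / sum(w * w for w in weights)

def systematic_resample(particles, weights, u0):
    """Systematic resampling driven by a single uniform draw u0 in [0, 1)."""
    n = len(particles)
    cumulative, total = [], 0.0
    for w in weights:
        total += w
        cumulative.append(total)
    out, j = [], 0
    for i in range(n):
        pos = (u0 + i) / n                  # evenly spaced sample positions
        while cumulative[j] < pos:
            j += 1
        out.append(particles[j])
    return out
```

A higher N_eff after each update, which the marginal/UKF construction aims for, means resampling is needed less often and particle diversity is preserved.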
Cross Media Correlation Reasoning and Retrieval
Zhang Hong, Wu Fei, and Zhuang Yueting
2008, 45(5):  869-876. 
A cross-media retrieval approach is proposed to solve the problem of measuring cross-media correlation between different modalities, such as image and audio data. First, both intra-media and cross-media correlations among multi-modality datasets are explored. Intra-media correlation measures the similarity between multimedia data of the same modality, and cross-media correlation measures how similar, at the semantic level, two multimedia objects of different modalities are. Cross-media correlation is very difficult to measure because of heterogeneity in low-level features; for example, images are represented with visual feature vectors while audio clips are represented with heterogeneous auditory feature vectors. Intra-media correlation is calculated based on geodesic distance, and cross-media correlation is estimated from link information among Web pages. Both kinds of correlation are then formalized in a cross-media correlation graph, over which cross-media retrieval is performed using the weight of the shortest path. A unique relevance feedback technique is developed to update the knowledge of multi-modal correlations by learning from user behavior and to enhance retrieval performance progressively. This approach breaks through the limitation of modality in the retrieval process and is applicable to query-by-example and cross-retrieval multimedia applications. Experimental results on an image-audio dataset are encouraging and show that the approach is effective.
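The shortest-path retrieval step can be sketched over a tiny hypothetical correlation graph: edge weight encodes distance (smaller means stronger correlation), and the cross-media distance between two objects is the weight of the shortest path between them. The node names and weights here are invented for illustration.

```python
import heapq

def cross_media_distance(graph, query, target):
    """Dijkstra shortest-path weight between two media objects in a
    correlation graph (graph[u] maps neighbor -> edge weight)."""
    dist, pq = {query: 0.0}, [(0.0, query)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d
        if d > dist[u]:
            continue                        # stale queue entry
        for v, w in graph.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return float("inf")

graph = {  # intra-media edges among images, plus cross-media edges to audio
    "img1":   {"img2": 0.2, "audio1": 1.0},
    "img2":   {"img1": 0.2, "audio1": 0.5},
    "audio1": {"img1": 1.0, "img2": 0.5},
}
d = cross_media_distance(graph, "img1", "audio1")
```

Note how the indirect route through a strongly correlated same-modality neighbor (img1 to img2 to audio1, weight 0.7) beats the weak direct cross-media edge (1.0): intra-media similarity propagates cross-media evidence, which is the point of the combined graph.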
A CapacityShared Heterogeneous CMP Cache
Gao Xiang, Zhang Longbing, and Hu Weiwu
2008, 45(5):  877-885. 
The characteristics of advanced integrated circuit technologies require architects to find new ways to utilize large numbers of gates and to mitigate the effects of high interconnect delays. Chip multiprocessors (CMPs) exploit increasing transistor counts by placing multiple processors on a single die, and as CMPs have become the trend in high-performance microprocessors, their target workloads have become more and more diversified. Due to the wire delay problem and the diversity of applications, neither private nor shared caches can provide both large capacity and fast access in CMPs. A novel CMP cache design, the heterogeneous CMP cache (HCC), is presented, in which chips are constructed from tiles of two different categories: the L2 caches of private tiles provide the lowest hit latency, while the L2 caches of shared tiles increase the effective cache capacity for shared data. Incorporating indirect-index cache technology to share capacity between the two hierarchies, HCC provides an on-chip memory subsystem that is both capacity-effective and fast to access. Detailed full-system simulations are used to analyze HCC performance for various programs, including SPEC CPU2000, SPLASH-2, and commercial workloads. The results show that HCC improves performance by 16% for single-threaded benchmarks and 9% for multi-threaded benchmarks. HCC is easy to implement, and its design ideas will be used in future multi-core processors of the Godson series.
Research on UserOriented Availability Modeling in Parallel Computer Systems
Zheng Fang, Zheng Xiao, Li Hongliang, and Chen Zuoning
2008, 45(5):  886-894. 
As the scale of parallel computer systems grows ever larger, the dependability of the system and its tasks faces great challenges. Availability encompasses both reliability and serviceability, and is thus the core specification describing correct service capability in a massively parallel computer system; its quantitative evaluation is significant for system analysis and design. User-oriented availability models of parallel computer systems that consider task characteristics and fault tolerance strategies are established with stochastic activity networks for two different examples in this paper: a capability computing application with frequent communication among nodes, and a capacity computing application without communication. These models, built from node modules and network modules, describe task running states and use the useful work rate to measure the degree of availability. They capture the main factors that influence the availability of a parallel computer system, including failures, hierarchical fault tolerance, fault detection, application characteristics, repair strategy, and fault coverage ratio. The models are then computed and analyzed with actual data. They can evaluate user-oriented availability quantitatively, especially when the tasks differ while the parallel computer systems are the same.
A New Algorithm for DTDs Absolute Consistency Checking
Lu Yan and Hao Zhongxiao
2008, 45(5):  895-900. 
A document type definition (DTD) describes the structure of a set of similar XML documents and serves as their schema, and the consistency of DTDs is an important topic in XML research. A DTD is consistent if and only if some valid XML document conforms to it, and inconsistent if no XML document conforms to it. Inconsistent DTDs are of no use and should be avoided as far as possible. However, a consistent DTD may still contain inconsistent substructures to which no valid XML data can conform; such DTDs should be avoided just as inconsistent ones are. To address this problem, a new notion of element consistency in a DTD is put forward in this paper, and based on it the notion of absolutely consistent DTDs, i.e., consistent DTDs with no inconsistent substructures, is discussed. Furthermore, a new algorithm for checking the absolute consistency of DTDs is offered, with which a DTD can quickly be determined to be absolutely consistent or not. The worst-case time complexity of the new algorithm is O(n).
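The notions above can be sketched with a fixpoint check under heavily simplified content models: each element maps to a list of alternative child sequences, with an empty sequence standing for EMPTY/#PCDATA. An element is consistent iff some alternative uses only consistent children, and a DTD is absolutely consistent iff every element is consistent. Note this naive sketch is quadratic in the worst case, unlike the paper's O(n) algorithm.

```python
def consistent_elements(dtd):
    """Least fixpoint of the rule: an element is consistent if one of its
    alternatives consists entirely of already-consistent children."""
    ok, changed = set(), True
    while changed:
        changed = False
        for elem, alternatives in dtd.items():
            if elem not in ok and any(all(c in ok for c in seq)
                                      for seq in alternatives):
                ok.add(elem)
                changed = True
    return ok

def absolutely_consistent(dtd):
    """Absolute consistency: no element has an unsatisfiable content model."""
    return consistent_elements(dtd) == set(dtd)

# 'b' and 'c' each require the other forever, so no finite document can
# contain them; 'a' is still consistent via its EMPTY alternative.
dtd = {"a": [["b"], []], "b": [["c"]], "c": [["b"]]}
```

This captures the paper's motivating case: the DTD as a whole is consistent (documents using only `a`'s empty alternative exist) yet not absolutely consistent, because the `b`/`c` substructure is unsatisfiable.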
NDSMMV: A New Dynamic Selection Strategy of Materialized Views for Multi-Dimensional Data
Zhang Dongzhan, Huang Zongyi, and Xue Yongsheng
2008, 45(5):  901-908. 
The selection strategy for materialized views is one of the important issues in data warehouse research: its goal is to select a group of materialized views that greatly cut down query cost within limited storage space. A cost model is proposed first. Then a new dynamic selection strategy of materialized views for multi-dimensional data (NDSMMV) is presented, composed of four algorithms: CVGA (candidate view generation algorithm), IGA (improved greedy algorithm), MAMV (modulation algorithm of materialized views), and DMAMV (dynamic modulation algorithm of materialized views). CVGA generates the candidate view set based on the multi-dimensional data lattice, reducing the number of candidate views to decrease the search space and the time consumption of the subsequent algorithms. IGA selects materialized views taking account of view query cost, view maintenance cost, and the space constraint. MAMV adjusts the materialized views according to changes in their profit, which improves query performance over the materialized views. DMAMV uses a sample space to judge whether the view set needs to change, which avoids sharp dithering. Comparative experiments indicate that NDSMMV operates more effectively than BPUS and FPUS: CVGA reduces the number of candidate views beforehand, IGA selects the materialized views quickly, MAMV adjusts them accurately, and query expense decreases further with DMAMV's online adjustment, which validates the efficiency of NDSMMV.
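The flavor of greedy selection over a data lattice can be sketched with the classic benefit-driven greedy that IGA-style algorithms build on (this is not NDSMMV itself, and the view sizes and coverage sets below are invented): the benefit of materializing a view is the total query-cost reduction it brings, and views are picked greedily until the budget runs out or no view helps.

```python
def greedy_select(sizes, covers, k):
    """Pick up to k views to materialize.  A query's answering cost is the
    size of the smallest materialized view covering it; the base (largest)
    view is always present so every query is answerable."""
    base = max(sizes, key=sizes.get)
    chosen = {base}

    def cost(q):
        return min(sizes[v] for v in chosen if q in covers[v])

    def benefit(v):
        return sum(max(0, cost(q) - sizes[v]) for q in covers[v])

    for _ in range(k):
        candidates = [v for v in sizes if v not in chosen]
        if not candidates:
            break
        best = max(candidates, key=benefit)
        if benefit(best) <= 0:
            break                       # no remaining view reduces any cost
        chosen.add(best)
    return chosen - {base}

# Tiny hypothetical lattice: "abc" is the base cuboid.
sizes = {"abc": 100, "ab": 50, "c": 10}
covers = {"abc": {"a", "b", "c", "ab", "ac", "bc", "abc"},
          "ab": {"a", "b", "ab"},
          "c": {"c"}}
```

With one slot the greedy prefers "ab" (it speeds up three query groups) over the much smaller "c" (which helps only one), illustrating why benefit, not size alone, drives selection.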
Extraction and Removal of Frame Line in Form Bill
Zhang Yan, Yu Shengyang, Zhang Chongyang, and Yang Jingyu
2008, 45(5):  909-914. 
In practical form bill images, characters often overlap with the form frames, which greatly affects the performance of automatic document image processing systems. Most form frame line removal algorithms work on binary images and thus cannot make good use of line characteristics in gray images. According to the structural attributes of financial documents, an improved line detection and removal algorithm for financial form image preprocessing is proposed in this paper. To reduce complexity and improve the effect of line removal, line detection and removal are carried out separately. First, frame lines are detected precisely according to their characteristics in the gray image; then a chain code method is used to describe each frame line region. Cross-points of characters and lines are subsequently detected with a deterministic finite automaton in order to analyze the overlapping types. Finally, frame lines are removed using the marks produced during cross-point detection. The limitation of stroke aberrance caused by thresholding is thereby overcome, and higher accuracy of line removal can be achieved. Experimental results on handwritten digit character recognition demonstrate that, compared with existing methods, the proposed algorithm is efficient and robust.
A New Approach to Ridgelet Transform
Zhao Xiaoming and Ye Xijian
2008, 45(5):  915-922. 
The wavelet transform is suitable for expressing the local characteristics of objects with isotropic singularities, but for anisotropic singularities wavelets are not the best tool, because they blur image edges and details. The ridgelet transform is essentially a wavelet basis function with an added parameter that characterizes direction. It has the same local time-frequency resolution as the wavelet transform, while also having a strong ability to identify and select directions, so it is an effective method for expressing the local characteristics of objects with anisotropic singularities. Each transform, however, is ineffective on the local characteristics suited to the other. Presented in this paper is an improved multiresolution method based on ridgelet theory, the quasi-ridgelet multiresolution analysis method. It unifies wavelet theory and ridgelet theory, making each a special case, and can discern both isotropic and anisotropic singularities. Thus the transform retains the ridgelet's superiority in line feature detection while enhancing point feature detection at the same time. Experimental comparison shows that the method combines the advantages and avoids the disadvantages of wavelet and ridgelet theory quite effectively, offering more flexibility in applications such as image denoising.